Using Selenium to Bypass Anti-Bot Detection on Websites Like Facebook

James C. Burchill
3 min read · Feb 19, 2025


Introduction

Selenium is a powerful automation tool widely used for web scraping, testing, and task automation. However, modern websites — especially social media platforms like Facebook, LinkedIn, and Instagram — implement advanced anti-bot detection mechanisms to prevent automated access. These countermeasures include behaviour tracking, CAPTCHAs, browser fingerprinting, and login verifications, making it increasingly difficult for bots to operate undetected.

This article explores the most common anti-bot techniques websites use, the workarounds available to Selenium users, and how to implement these solutions effectively.

Common Anti-Bot Detection Methods

CAPTCHAs and ReCAPTCHAs

Websites often use CAPTCHAs to verify that a user is human. Google’s ReCAPTCHA, for instance, tracks mouse movements and behavioural data to distinguish bots from real users.

Browser Fingerprinting

Websites analyze parameters such as installed plugins, screen resolution, WebGL data, and user-agent strings to determine if a visitor uses a headless browser or an automation tool like Selenium.
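
You can inspect a few of these signals yourself through Selenium's execute_script, which is a useful way to see what a site sees. This is a minimal sketch; real fingerprinting scripts collect far more (canvas hashes, WebGL data, installed fonts, and so on).

from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com")

# A few of the signals fingerprinting scripts commonly read
signals = driver.execute_script("""
    return {
        userAgent: navigator.userAgent,
        languages: navigator.languages,
        plugins: navigator.plugins.length,
        screen: [screen.width, screen.height, screen.colorDepth]
    };
""")
print(signals)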

Headless Browser Detection

Some sites block headless Chrome instances by checking specific browser properties (e.g., navigator.webdriver, which is set to true in Selenium by default).
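
Reusing the driver from the sketch above, you can check the flag directly:

# Prints True in a stock Selenium session
print(driver.execute_script("return navigator.webdriver"))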

IP Rate Limiting and Tracking

Frequent requests from the same IP address can trigger security measures, leading to CAPTCHAs, temporary bans, or account suspensions.

Automated Mouse & Keyboard Interaction Detection

Sites monitor how users interact with elements. If interactions are too fast or uniform, they may flag the activity as bot-driven.

Workarounds to Avoid Detection

Using a Real Browser Profile

You can use an existing browser profile instead of launching a new Chrome session every time. This makes the automation appear more like an actual user session.

Implementation:

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("user-data-dir=/path/to/your/chrome/profile")

# Launch browser with user profile
driver = webdriver.Chrome(options=options)
driver.get("https://www.facebook.com/")
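
Note that Chrome locks a profile directory while it is in use, so close any Chrome windows already running on that profile before launching Selenium against it (or point user-data-dir at a dedicated copy of the profile).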

Disabling Selenium Detection Flags

Chrome exposes navigator.webdriver = true when it is being driven by Selenium, and websites read this flag to detect automation. You can suppress or overwrite the flag to make your browser appear human.

Implementation:

options.add_argument(" - disable-blink-features=AutomationControlled")p
driver.execute_script("Object.defineProperty(navigator, 'webdriver', {get: () => undefined})")
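
One caveat: execute_script only runs after the page has loaded, so a site's own scripts can read the flag first. Chrome-based drivers also expose execute_cdp_cmd, which lets you inject the override before any page script runs:

# Apply the override before any page script can read the flag
driver.execute_cdp_cmd(
    "Page.addScriptToEvaluateOnNewDocument",
    {"source": "Object.defineProperty(navigator, 'webdriver', {get: () => undefined})"}
)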

Randomizing Mouse Movements and Typing

Simulating actual human behaviour by moving the mouse randomly and typing text instead of pasting it can help avoid detection.

Implementation:

import random
import time

def random_typing(element, text):
    """Type text one character at a time with human-like pauses."""
    for char in text:
        element.send_keys(char)
        time.sleep(random.uniform(0.1, 0.3))  # Random delay between keystrokes
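
The snippet above covers typing; for mouse movement, ActionChains can drift the pointer in small, irregular steps. A minimal sketch (the step count and offsets are arbitrary choices, not values tuned to any particular site):

import random

from selenium.webdriver import ActionChains
from selenium.webdriver.common.by import By

def random_mouse_moves(driver, steps=8):
    """Drift the pointer around the page centre in small, irregular steps."""
    body = driver.find_element(By.TAG_NAME, "body")
    actions = ActionChains(driver)
    actions.move_to_element(body)  # start from the centre of the page
    for _ in range(steps):
        actions.move_by_offset(random.randint(-30, 30), random.randint(-30, 30))
        actions.pause(random.uniform(0.05, 0.2))
    actions.perform()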

Using Rotating Proxies and Residential IPs

Changing IP addresses prevents sites from tracking repeated requests from a single source. Residential proxies help mimic real user traffic.

Implementation:

options.add_argument('--proxy-server=http://your-proxy:port')

Alternatively, services like Bright Data, Oxylabs, and SmartProxy provide rotating residential IPs.
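
If you manage your own proxy list, a simple approach is to pick a different endpoint for each session. A minimal sketch; the addresses below are placeholders for your own proxy endpoints:

import random

from selenium import webdriver

# Placeholder endpoints: substitute your own proxy list
PROXIES = [
    "http://proxy1.example.com:8000",
    "http://proxy2.example.com:8000",
    "http://proxy3.example.com:8000",
]

options = webdriver.ChromeOptions()
options.add_argument(f"--proxy-server={random.choice(PROXIES)}")
driver = webdriver.Chrome(options=options)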

Avoiding Headless Mode

Many websites block headless browsing. Running Selenium in non-headless mode with a visible UI helps bypass detection.

Implementation:

# options.add_argument("--headless")  # leave this commented out to run with a visible UI
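
If headless is unavoidable (on a server, for example), two settings reduce, but do not remove, the obvious tells: recent Chrome versions ship a rewritten headless mode, and an explicit window size avoids the headless default of 800x600.

options.add_argument("--headless=new")  # rewritten headless mode in recent Chrome
options.add_argument("--window-size=1366,768")  # a common desktop resolution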

Emulating Human-Like Scroll & Interaction

Websites track scrolling patterns and interaction delays to differentiate bots from humans.

Implementation:

def human_scroll(driver):
    """Scroll the page in steps, pausing the way a reader would."""
    last_height = driver.execute_script("return document.body.scrollHeight")
    while True:
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(random.uniform(1, 3))  # Pause before checking for new content
        new_height = driver.execute_script("return document.body.scrollHeight")
        if new_height == last_height:
            break
        last_height = new_height
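
A hypothetical usage after loading a page:

driver.get("https://www.facebook.com/")
human_scroll(driver)

Note that this sketch jumps a full page per step; on stricter sites you may want to scroll in smaller, random increments instead.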

Handling CAPTCHAs Automatically

You can use third-party CAPTCHA-solving services such as 2Captcha or Anti-Captcha to solve image CAPTCHAs and reCAPTCHAs for you.

Implementation:

import time

import requests

API_KEY = "your_2captcha_api_key"
CAPTCHA_SITEKEY = "site-key-of-captcha"
URL = "https://www.website.com/captcha_page"

# Submit the CAPTCHA; 2Captcha returns a task ID, not the solution itself
response = requests.post("http://2captcha.com/in.php", data={
    "key": API_KEY,
    "method": "userrecaptcha",
    "googlekey": CAPTCHA_SITEKEY,
    "pageurl": URL,
    "json": 1
}).json()
captcha_id = response["request"]

# Poll until a worker has produced a token
while True:
    time.sleep(5)
    result = requests.get("http://2captcha.com/res.php", params={
        "key": API_KEY, "action": "get", "id": captcha_id, "json": 1
    }).json()
    if result["request"] != "CAPCHA_NOT_READY":
        captcha_solution = result["request"]
        break
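
The token then has to reach the page. For reCAPTCHA v2 the usual technique is to write it into the hidden g-recaptcha-response textarea before submitting the form; whether that alone is enough depends on how the site wires its verification callback:

# Inject the solved token into the hidden reCAPTCHA response field
driver.execute_script(
    "document.getElementById('g-recaptcha-response').innerHTML = arguments[0];",
    captcha_solution
)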

Delaying Actions with Randomized Intervals

Spacing actions at irregular intervals prevents detection based on the fixed, repetitive timing that is typical of scripts.

Implementation:

time.sleep(random.uniform(2, 6))  # Wait between 2 and 6 seconds
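
To apply this consistently rather than ad hoc, you can route clicks through a small helper. A minimal sketch; the By.NAME locator is a placeholder for whatever element you are actually targeting:

import random
import time

from selenium.webdriver.common.by import By

def paced(action, low=2, high=6):
    """Pause for a random interval, then perform the action."""
    time.sleep(random.uniform(low, high))
    return action()

# Hypothetical usage: a randomized pause before each click
paced(lambda: driver.find_element(By.NAME, "q").click())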

Conclusion

Modern websites use sophisticated anti-bot mechanisms to detect automation tools like Selenium. However, by implementing techniques such as browser profile usage, random interaction patterns, rotating proxies, and CAPTCHA-solving services, you can significantly reduce the chances of detection.

While it is possible to bypass anti-bot protections, it is important to use these techniques responsibly and ethically — avoiding spammy behaviour that may violate a website’s terms of service.

With these best practices, Selenium can be used more effectively for legitimate automation tasks, from data extraction to testing, without triggering security measures that could get your account blocked or restricted.

“A tool is neither good nor bad … it’s all in how you use it.”
