Learn how to protect your site from bad bots while allowing visits from safe user agents with our all-in-one WordPress security plugin Defender.
Your website’s security is under threat 24/7, whether from a serious DDoS attack, an XSS attack, SQL injection, or just annoying spam. Defender’s User Agent Banning not only offers your WordPress site robust protection against requests from bad user agents at the server level, but also helps free up server resources for all your good traffic.
And it’s all available at no cost (get it for free at wordpress.org).
Let’s dive in…
What Is a User Agent?
Let’s start with this definition from Wikipedia…
A user agent is any software, acting on behalf of a user, which retrieves, renders and facilitates end-user interaction with Web content.
Network servers, email clients, search engines, and web browsers are all examples of user agents.
In practice, a user agent identifies itself with a “string” (i.e. a line of text) that the client sends to a server. In other words, it’s a way of saying “Hello! This is who I am” to a web server.
A web browser, for example, includes a User-Agent field in its HTTP header identifying the browser and operating system to the web server (e.g. Chrome Browser Version 94.0.4606.61 on Windows 10).
The user agent string format for web browsers reads as follows:
Mozilla/[version] ([system and browser information]) [platform] ([platform details]) [extensions]
This allows each web browser to have its own distinctive user agent, and the contents of the user agent field can vary from browser to browser.
When I looked up my web browser’s user agent, for example, I got a string similar to this: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36
This information is useful to a web server, because it allows the web server to serve different web pages to different web browsers and different operating systems (e.g. send mobile pages to mobile web browsers, show different pages to different platforms or operating systems, and even display “please upgrade your browser” messages to older web browsers).
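To make this concrete, here is a minimal sketch (not code from any real web server) of how a server might branch on the User-Agent header; the substring checks and page names below are illustrative assumptions only.

```python
def choose_page(user_agent: str) -> str:
    """Pick a page to serve based on a rough reading of the User-Agent."""
    ua = user_agent.lower()
    if "mobile" in ua or "iphone" in ua or "android" in ua:
        return "mobile.html"   # send the mobile layout
    if "msie" in ua or "trident" in ua:
        return "upgrade.html"  # legacy Internet Explorer: suggest an upgrade
    return "desktop.html"      # default desktop layout

print(choose_page(
    "Mozilla/5.0 (iPhone; CPU iPhone OS 15_0 like Mac OS X) "
    "AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148"
))  # mobile.html
```

Real UA-sniffing logic is far messier than this (browsers imitate each other’s strings), which is why many sites prefer feature detection, but the basic branching idea is the same.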
Good Bots vs Bad Bots
Most website owners want their content to be found on the web, especially by search engines like Google.
Google automatically discovers and scans websites by following links from one webpage to another, using user agents called “crawlers”. Google’s main crawler, for example, is called Googlebot.
Most website owners, therefore, would consider Googlebot to be a “good bot” and welcome having this user agent visit their website via their web server.
Not all user agents, however, are good guys.
Unwanted visitors like spammers, scrapers, email harvesters, and malicious bots can also make use of user agents to threaten the security of your information and your website.
Example of a Cross-Site Scripting (XSS) attack
An attacker can modify the user agent name so that it contains malicious JavaScript code (for example, an embedded script tag).
Here is the problem:
- A server will trust the user agent name and store the string (e.g. in a web analytics tool).
- A real user (e.g. an admin) then accesses the tool storing the string.
- When the page with the logs containing the string is opened, the browser parses all the listed user agents and executes the script. This script can be a simple redirect or a spammy pop-up.
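The root cause in the steps above is rendering the raw User-Agent string into an HTML page. As a hedged sketch (independent of Defender, which blocks such requests outright), here is the classic mitigation on the rendering side: escape the stored string before placing it in HTML. The `render_log_row` function and the payload are hypothetical examples.

```python
import html

def render_log_row(user_agent: str) -> str:
    """Render one stored User-Agent as an HTML table cell, escaped."""
    # html.escape neutralizes <, >, &, and quotes, so an embedded
    # <script> payload is displayed as text instead of being executed.
    return "<td>" + html.escape(user_agent) + "</td>"

malicious_ua = "Mozilla/5.0 <script>alert('xss')</script>"
print(render_log_row(malicious_ua))
# <td>Mozilla/5.0 &lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</td>
```

Escaping on output and blocking bad user agents at the firewall are complementary defenses, not alternatives.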
Defender’s User Agent Banning protects against this type of XSS attack by stopping the request when such a user agent name is detected, so the malicious string never reaches the page.
Example of an SQL injection
This is similar to the attack above. A user agent name can contain SQL syntax, for example a single quote designed to break out of a query string.
If the server doesn’t have a high level of protection, this can cause an error, and an attacker can then start experimenting and executing SQL queries.
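On the application side, the standard defense against this class of attack is to pass the User-Agent string to the database as a bound parameter rather than concatenating it into the query. A minimal sketch using Python’s built-in sqlite3 module (the table name and payload are assumptions for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (user_agent TEXT)")

# A User-Agent containing a quote would break naively concatenated SQL;
# with a "?" placeholder it is stored safely as plain data.
hostile_ua = "Mozilla/5.0'; DROP TABLE visits; --"
conn.execute("INSERT INTO visits (user_agent) VALUES (?)", (hostile_ua,))

row = conn.execute("SELECT user_agent FROM visits").fetchone()
print(row[0])  # the payload is stored verbatim, never executed as SQL
```

As with XSS, this is a complement to server-level banning: Defender stops known bad agents before they reach your application, and parameterized queries protect you from whatever slips through.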
So, how can you let the good bots in and prevent the bad bots from visiting your site?
This is where Defender comes to the rescue.
How To Set Up Defender’s User Agent Banning
Defender’s User Agent Banning feature lets you specify which user agents you will and will not allow to visit your site.
To access and enable this feature, log into your site and go to Defender > Firewall.
Click the button to activate the feature…
You can permanently ban malicious bots and bad user agents from accessing your site by entering these into the Blocklist field (one per line). Defender includes some common bad bots in the Blocklist by default. You can add more bad bots to the list by searching online for “bad user agent block lists”.
Conversely, you can add good bots and user agents to the Allowlist field to allow them permanent access to your site. Defender includes a number of legitimate bots and user agents in this list by default.
Note: If you add the same user agent or bot to both fields, the Allowlist will override the Blocklist.
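The allowlist-overrides-blocklist rule can be sketched as follows. This is an illustrative simplification, not Defender’s actual implementation; the list entries and matching strategy are assumptions.

```python
# Illustrative lists only; Defender ships its own defaults.
BLOCKLIST = ["masscan", "nikto", "sqlmap"]
ALLOWLIST = ["googlebot", "bingbot"]

def is_banned(user_agent: str) -> bool:
    """Return True if the user agent should be blocked."""
    ua = user_agent.lower()
    # Check the allowlist first: an allowlisted agent is never banned,
    # even if it also matches a blocklist entry.
    if any(good in ua for good in ALLOWLIST):
        return False
    return any(bad in ua for bad in BLOCKLIST)

print(is_banned("sqlmap/1.5"))                               # True
print(is_banned("Mozilla/5.0 (compatible; Googlebot/2.1)"))  # False
```

Checking the allowlist first is what makes the override behavior work: order of evaluation, not list contents, decides ties.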
The Message section lets you customize and preview the message that will display on your site to blocked users throughout the lockout period.
Bots are identified by their IP address and HTTP User-Agent header. A missing User-Agent header is unusual and should be regarded as a red flag; such requests often carry SQL injection attempts. In this case, the best option is to block their IP address.
You can block any IP addresses that send Post requests with empty referer and user agent headers in the Empty Headers section. (Note: the word referer is not misspelled.)
Note: Spam bots sometimes do not have a referer or HTTP header, so activating this option will also help prevent spammy form submissions and comments.
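The empty-headers rule described above can be sketched as a simple predicate. This is a hedged illustration of the concept, not Defender’s code; the function name and header handling are assumptions.

```python
def is_suspicious_post(method: str, headers: dict) -> bool:
    """Flag POST requests arriving with neither Referer nor User-Agent."""
    if method.upper() != "POST":
        return False
    referer = headers.get("Referer", "").strip()
    user_agent = headers.get("User-Agent", "").strip()
    return referer == "" and user_agent == ""

print(is_suspicious_post("POST", {}))                             # True
print(is_suspicious_post("POST", {"User-Agent": "Mozilla/5.0"}))  # False
```

Ordinary browsers always send a User-Agent header (and usually a Referer when submitting a form), which is why matching on both being empty catches many crude spam bots with few false positives.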
Finally, you can easily deactivate the feature at any time if you no longer want to use it.
Remember to click the Save button when done to update your plugin settings.
To view a log of Defender’s activity and confirm that the feature is active and working, select Firewall > Logs in the plugin’s menu.
No Whiffs or Bots
With Defender’s User Agent Banning feature activated, bad bots won’t even get a sniff in and malicious user agents will strike out every time they visit your site. Defender goes straight to work banning and locking out user agents as per your configured lockout settings.
Additionally, Defender’s continuous monitoring protects your site while saving server resources for legitimate traffic, thus helping to further improve your site’s performance.
For more information or help using this feature, check out our documentation section or contact our 24/7 support team.
This article was originally written by Martin Aranovitch and published on the WPMU DEV Blog on 2021-10-13.