Why CAPTCHA Should Be a Last Resort in the Battle Against Bots

You’ve almost certainly seen and interacted with a CAPTCHA, even if you didn’t know that’s what it was called. A slightly clumsy acronym for Completely Automated Public Turing test to tell Computers and Humans Apart, a CAPTCHA is a test intended to work out whether or not a particular user is actually human.

The most common form of CAPTCHA is a wiggly line of distorted letters that the user must decipher and type into an entry field to show they recognize them. Other variations ask users to solve simple math problems, listen to a sound, or identify pictures (“click the only picture that does not include a road sign”). In every case, the goal is the same: make sure the website or service being guarded by the CAPTCHA is dealing with legitimate users rather than being spammed by bots that could be out to do harm. This, in turn, can help reduce spam comments on articles, stop fake registrations for services, protect email accounts from cyber attacks, or help in any other scenario where it’s useful to sort the real users from the fakes.
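
To make that flow concrete, here is a minimal sketch of the server-side half of a typical CAPTCHA, assuming a reCAPTCHA-style “siteverify” endpoint; the token handling and secret storage shown here are illustrative placeholders rather than a complete integration.

```typescript
// Minimal sketch of server-side CAPTCHA verification (Node 18+, built-in fetch),
// assuming a reCAPTCHA-style "siteverify" endpoint. Token and secret handling
// below are illustrative, not a full production integration.

interface VerifyResponse {
  success: boolean;
  "error-codes"?: string[];
}

async function verifyCaptchaToken(token: string, secret: string): Promise<boolean> {
  const res = await fetch("https://www.google.com/recaptcha/api/siteverify", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ secret, response: token }).toString(),
  });
  const data = (await res.json()) as VerifyResponse;
  return data.success; // false means: treat the request as untrusted
}
```

Even this small amount of plumbing adds an extra step to every protected action, which is exactly the friction discussed below.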

All of this makes perfect sense. CAPTCHAs are a conceptually smart way of getting around a massive online problem by exploiting the fact that, as smart as bots can be, there are still behavioral or ability differences that separate people from machines.

So why should they be a last resort against bots?

The trouble with CAPTCHA

The biggest reason is the most obvious: CAPTCHAs disrupt the user experience. If you’re on a website and want to buy a ticket for a concert, leave a quick comment, or download a document, having to answer a question first is simply no fun. Nobody enjoys decoding a twisted line of text; it’s a pain point placed there expressly to get in the way of an action that should be frictionless. For the user of a service, CAPTCHAs are a time-waster that slows down whatever they are trying to do. For the owner of that service, they add a conversion-killing step that could put some people off using the service altogether.

But CAPTCHAs can also prove difficult for some users, producing false positives that inadvertently flag legitimate users as bots. Users who are blind or have other visual impairments may struggle with image-recognition CAPTCHAs, while deaf or hard-of-hearing users may struggle with audio CAPTCHAs. The same is true for users with dyslexia, who may find it difficult to parse a word and then reproduce its letters in the correct order. This turns a tool designed to let humans into systems on demand into an accessibility headache that blocks some people while letting others through.

In other cases, questions may simply be challenging for certain users. A math problem that seems straightforward to one person may be difficult for another. A whimsical CAPTCHA that asks users to finish a famous film quote (“We’re going to need a bigger…” A: Boat, B: Car, C: Lawnmower) is a whole lot less whimsical for a user who hasn’t seen the movie in question. In these cases, the user comes away not only locked out of a particular online tool, but feeling as though they’ve been insulted.

There are technical problems as well. For starters, certain CAPTCHA types do not support every browser, meaning that large numbers of users could potentially be blocked from using a website or service regardless of their ability to answer the CAPTCHA in question. In addition, some types of CAPTCHA aren’t accessible to users browsing with a screen reader or other assistive device.

Make CAPTCHAs your last resort

Fortunately, bot prevention methods have advanced in the years since the CAPTCHA was first invented. Rather than purposely placing a roadblock in users’ path, with the detrimental impact on the experience that brings, there are now ways to sift legitimate users from fake ones without inconveniencing anyone. These approaches include tools such as Web Application Firewalls (WAFs), which monitor visitor behavior to permit legitimate traffic while blocking bots.

These tools protect a system’s access points while looking for anomalies in visitor behavior, seeking out and blocking bad bots. That might include browser validation, behavioral analysis, and other measures to ensure that visitors are coming from a good (or, at least, human) place. The best thing about this approach? It works quickly and accurately in the background, with a very low false positive rate. The owners of websites, APIs, and mobile apps are protected, while visitors don’t feel like they are being penalized.
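
As a purely illustrative sketch, the snippet below shows the kind of lightweight signals such a layer might weigh before ever showing a challenge; the field names and thresholds are hypothetical examples, not any particular vendor’s API.

```typescript
// Illustrative sketch only: a few signals a WAF or bot-management layer might
// combine to separate humans from automation without interrupting anyone.
// Field names and thresholds here are hypothetical examples.

interface RequestSignals {
  userAgent: string;            // as reported by the client
  requestsLastMinute: number;   // from a rate counter keyed by IP or session
  honeypotFieldFilled: boolean; // hidden form field that real users never see
  passedBrowserCheck: boolean;  // result of a lightweight browser validation
}

function looksLikeBot(s: RequestSignals): boolean {
  if (s.honeypotFieldFilled) return true;                               // humans cannot see the field
  if (!s.userAgent || /curl|python-requests|wget/i.test(s.userAgent)) return true;
  if (s.requestsLastMinute > 120) return true;                          // far faster than human browsing
  if (!s.passedBrowserCheck) return true;                               // many simple bots fail it
  return false;
}

// Only requests that trip these checks need any further friction at all;
// everyone else carries on without ever seeing a challenge.
```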

CAPTCHAs serve a purpose online, but they should be a last resort, not a first port of call. So, if bad bots are a problem you’re experiencing (or at risk of experiencing), make sure you speak to the experts who can help. After all, cybersecurity shouldn’t be painful, either for the owners of websites and services or for those visiting them.