Challenges of Bot Detection: Keeping Defenses High Without Triggering False Positives
Identifying bots is important and complicated work. Keeping up with ever-changing bot technologies and attack strategies requires deep knowledge and continuous threat research. The outbreak of the COVID-19 pandemic hasn't made it any easier: consumers are changing their online behavior in radical ways, and cybercriminals are paying attention. Bot attacks have increased in industries like online retailing, media, and gaming, essentially following the money as consumer habits changed in response to COVID-19. When you notice unusual traffic patterns on a website, you might be tempted to fix the problem, at least in part, by tweaking the detection rules.
But this adds a different kind of risk. False positives triggered by rule changes undermine the accurate separation of automated bot traffic from legitimate traffic, and they can upset real customers who are relying on these online services more than ever. On the other hand, reducing detection sensitivity to decrease the risk of false positives can lead to a flood of false negatives, letting too many bad bots pass through. That can prove disastrous, especially if the bots are conducting credential stuffing or inventory hoarding attacks.
The Akamai Crypto Challenge as Action Reduces False Positives and Changes Bot Economics
Crypto Challenge as Action (CCA) challenges a user who is suspected of being a bot. Instead of serving a hard 403 to simply block the bot (and thereby alerting the bot operator to change evasion techniques), CCA serves a cryptographic problem that requires a minimum amount of time to solve. This feature has several significant benefits:
- Sustains a quality human user experience, as it requires no human intervention; only the client's CPU does the work
- Increases botnet resource consumption, making attacks more expensive for the bot operator
- Reduces the number of requests a bot operator can perform in any given period, disrupting the success of high-request-volume attacks
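The mechanics behind these benefits resemble a classic hashcash-style proof of work: the server issues a random challenge, the client must burn CPU time finding a nonce whose hash meets a difficulty target, and the server verifies the answer with a single cheap hash. Akamai does not publish the exact puzzle CCA uses, so the sketch below is an illustrative assumption of the general technique, not Akamai's implementation; the `DIFFICULTY` value and function names are invented for the example.

```python
import hashlib
import secrets

DIFFICULTY = 12  # leading zero bits required; raising this multiplies the client's work


def leading_zero_bits(digest: bytes) -> int:
    """Count how many leading zero bits a hash digest has."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits


def issue_challenge() -> str:
    """Server side: hand the suspect client a random challenge string."""
    return secrets.token_hex(16)


def solve(challenge: str) -> int:
    """Client side: brute-force a nonce until the hash clears the difficulty bar.

    Costs roughly 2**DIFFICULTY hash attempts on average -- this is the
    deliberate CPU tax on the requester.
    """
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= DIFFICULTY:
            return nonce
        nonce += 1


def verify(challenge: str, nonce: int) -> bool:
    """Server side: verification is one hash, so the defender's cost stays tiny."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= DIFFICULTY


challenge = issue_challenge()
nonce = solve(challenge)         # expensive for the client
assert verify(challenge, nonce)  # cheap for the server
```

The asymmetry is the point: a real user's browser solves one puzzle imperceptibly, while a botnet paying this tax on every flagged request sees its costs scale with its request volume.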
This feature is now available to Akamai Bot Manager Premier (BMP) customers in broad beta as a free functional addition. It is designed for customers who require aggressive bot detection capabilities with the lowest possible rate of false positives.
How Crypto Challenge as Action Works
CCA offers an exciting addition to the fight against bots. It allows customers to issue a complex cryptographic puzzle when a request to an endpoint protected by BMP's behavioral anomaly detection is flagged as a bot. On successfully solving the challenge, real users are allowed to access the website, and that state is retained so the session is not re-challenged the next time it hits a protected endpoint. Customers can define the length of this grace period. These challenges offer a second chance for real users (false positives) to qualify as human in case aggressive BMP detections have tagged a human request as a bot.
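The decision flow described above can be sketched in a few lines. This is a simplified assumption of the logic, not BMP's actual back end: the session store, the `serve`/`challenge` outcomes, and the one-hour grace period are all invented for illustration (the real grace period is customer-defined).

```python
import time

GRACE_PERIOD_SECONDS = 3600  # assumed example; in BMP this window is customer-defined

# session_id -> timestamp of the session's last successful challenge solve
solved_sessions: dict[str, float] = {}


def on_protected_request(session_id: str, flagged_as_bot: bool) -> str:
    """Decide what a protected endpoint does with a request (simplified sketch)."""
    if not flagged_as_bot:
        return "serve"  # traffic identified as human passes through untouched
    solved_at = solved_sessions.get(session_id)
    if solved_at is not None and time.time() - solved_at < GRACE_PERIOD_SECONDS:
        return "serve"  # session already proved itself within the grace period
    return "challenge"  # serve the crypto challenge instead of a hard 403


def on_challenge_solved(session_id: str) -> None:
    """Record a successful solve, starting that session's grace period."""
    solved_sessions[session_id] = time.time()
```

A falsely flagged human solves the challenge once, lands in `solved_sessions`, and sails through subsequent protected requests until the grace period lapses; a bot that cannot solve the puzzle stays stuck at `"challenge"`.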
In practice, this means human users are served the challenge only once; the next time, the BMP back end already has the data to decide on the real user's humanness. Bots, on the other hand, are re-challenged multiple times, and even if sophisticated bots are able to solve the challenge, the more requests they send, the more expensive the information becomes to obtain. As an action, the Crypto Challenge augments existing bot detections and provides another mechanism against evasive attackers. From the attacker's perspective, CCA changes the economics by slowing the rate of requests while increasing the compute costs required to operate large credential stuffing botnets.
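A back-of-envelope calculation shows how this shift in economics plays out. Every number below is an illustrative assumption chosen for the arithmetic (botnet size, solve time, per-bot request rate), not a figure from Akamai or from any real attack.

```python
# Assumed inputs -- illustrative only, not measured values.
bots = 10_000                  # machines in the hypothetical botnet
solve_seconds = 2.0            # CPU time each challenge takes one bot to solve
unthrottled_rps_per_bot = 50   # requests/sec a bot manages with no challenge

# Without the challenge, throughput is limited only by the bots themselves.
unthrottled_rps = bots * unthrottled_rps_per_bot

# With a per-request proof of work, each bot gets one request per solve.
throttled_rps = bots / solve_seconds

print(f"without challenge: {unthrottled_rps:,.0f} req/s")
print(f"with challenge:    {throttled_rps:,.0f} req/s")
print(f"slowdown factor:   {unthrottled_rps / throttled_rps:.0f}x")
```

Under these assumptions the botnet drops from 500,000 to 5,000 requests per second, a 100x slowdown, while every remaining request now costs the operator two CPU-seconds. A credential stuffing campaign that depends on raw volume has to either shrink dramatically or pay for far more compute.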
CCA in Practice: How an Airline Found Success Against Sophisticated Bots
A leading airline's website was under constant threat from crawler and scraper bots used by competitors and unauthorized third parties to retrieve valuable information from the company's website. The airline needed to protect its "search flight" endpoints, which support deep linking (the use of a hyperlink that leads to specific content without going through the home page). Deep links can create challenging use cases that increase the risk of false positives, yet the airline's online travel agency partners also relied on deep linking in marketing campaigns, social networks, and banner ads, so the links could not simply be blocked.
After onboarding with the Akamai CCA, the airline customer was able to successfully block bots and effectively address the challenge of false positives.
The company's website traffic analysis shows some interesting stats on the key impacts:
- 94.9% of the traffic did not attempt to solve the challenge
- 0.03% of user responses were invalid
- The number of false positives was reduced from approximately 5% to almost zero
The above facts reveal two significant points:
- Most bots were not able to solve the challenge and were blocked.
- Human visitors may have been challenged because of the deep link issue, but they were not blocked, which resolved the problem of false positives. CCA proved to be a good solution for this particular deep linking use case.
In a nutshell, the Akamai CCA helped the airline customer minimize the risk of false positives for legitimate end users while enabling more aggressive detection settings that reduce the potential for false negatives. Traffic identified as human saw no impact; traffic identified as nonhuman received the CCA instead of a deny action, which significantly reduced false positives and ensured real end users were not blocked outright. As an additional benefit, CCA dramatically increased the computing costs for bot operators, making it far less cost-effective to run large credential stuffing botnets against the customer's website.