Artificial intelligence is driving a significant surge in sophisticated bad bot traffic, a trend that worsened in the first quarter of this year. These bad bots, rather than human visitors, generated nearly half of all web traffic.
AI-driven super bots accounted for 33% of observed activity and employed advanced evasion techniques to bypass traditional detection tools. These high-end automated attacks on e-commerce revenue, customers, and brands are causing increasingly steep financial losses and network security breaches.
On May 30, bot defense developer Kasada released its automated threats quarterly report for January through March 2024. The report shows a strategic shift toward more organized and financially motivated online fraud activities. It illustrates how adversaries use a blend of existing and new solver services and advanced exploit kits to bypass traditional bot mitigation tools effectively.
Bots generating 46% of internet traffic is not surprising. What is unexpected is that nearly one-third of those bad bots have been classified as sophisticated types, remarked Nick Rieniets, field CTO at Kasada.
“It indicates that bots are becoming increasingly advanced to overcome increasingly sophisticated bot defenses. Fraudsters are taking advantage of tools, such as highly customized versions of Google Puppeteer and Microsoft Playwright, to develop these automated threats,” Rieniets told the E-Commerce Times.
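The report itself does not include code, but a minimal sketch of browser automation shows how little scripting it takes to drive a real browser with human-like pacing, which is part of why this traffic is hard to tell apart from genuine visitors. The URL, selectors, and search term below are hypothetical placeholders, not anything from the report.

```typescript
// Minimal Playwright sketch: a real browser, driven by a script, with human-like pacing.
// The storefront URL and selectors are hypothetical placeholders.
import { chromium } from 'playwright';

(async () => {
  const browser = await chromium.launch({ headless: true });
  const page = await browser.newPage();

  await page.goto('https://shop.example.com');                              // hypothetical storefront
  await page.mouse.move(220, 340, { steps: 25 });                           // gradual, human-like pointer path
  await page.type('#search', 'limited edition sneakers', { delay: 120 });  // per-keystroke typing delay
  await page.keyboard.press('Enter');
  await page.waitForLoadState('networkidle');                              // wait for results like a patient user

  await browser.close();
})();
```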
Escalating Fraudulent Online Transactions
The Kasada report highlights key shifts in bot operations compared to previous quarters. The goal of the Quarterly Threat Report is to equip cybersecurity and threat intelligence professionals with the critical information needed to understand and counteract current attack vectors.
Four key observations illustrate the new sophistication and coordination of automated cyberattacks:
- Advanced solver services can automatically bypass Captcha and other human verification methods. They use machine-learning algorithms and human-assisted solutions that mimic legitimate human interactions.
- New and updated exploit kits target vulnerabilities in web applications, APIs, and third-party integrations. These automated processes let attackers launch large-scale assaults with minimal effort, increasing the efficiency and scale of attacks and posing a significant threat to organizations that rely on legacy security measures.
- Bots are designed to masquerade as legitimate traffic by mimicking human behavior, simulating mouse movements, keystrokes, and other user interactions to evade detection (a simplified detection sketch follows this list). This approach indicates a shift toward using bots for organized online fraud.
- Bad bot builders plan upcoming account takeover campaigns and arbitrage opportunities in online underground forums. These forums are hotbeds for selling automated tools and services that facilitate these activities. This strategy lowers the entry barrier for bad actors, increasing the frequency and scale of automated attacks.
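Kasada does not disclose its detection logic, but a simplified heuristic illustrates the kind of behavioral signal defenders look for when bots simulate keystrokes: timing that is too uniform to be human. The event structure, field names, and threshold below are illustrative assumptions, not the report’s method.

```typescript
// Simplified, illustrative heuristic (not Kasada's method): flag sessions whose
// keystroke timing is too uniform to be human. Fields and thresholds are hypothetical.
interface InteractionEvent {
  type: 'keydown' | 'mousemove' | 'click';
  timestampMs: number;
}

function looksScripted(events: InteractionEvent[]): boolean {
  const keyTimes = events
    .filter((e) => e.type === 'keydown')
    .map((e) => e.timestampMs);
  if (keyTimes.length < 5) return false; // not enough signal to judge

  // Compute inter-keystroke intervals and their spread.
  const gaps = keyTimes.slice(1).map((t, i) => t - keyTimes[i]);
  const mean = gaps.reduce((a, b) => a + b, 0) / gaps.length;
  const variance = gaps.reduce((a, b) => a + (b - mean) ** 2, 0) / gaps.length;

  // Real typing is irregular; near-constant gaps suggest a scripted type() call.
  return Math.sqrt(variance) < 5; // 5 ms threshold is an arbitrary illustration
}
```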
“We are seeing people with very low skill levels develop bots. Additionally, organizations providing public LLMs use web scrapers aggressively to train their models. So, this has become a major concern for many businesses today,” observed Rieniets, adding that cybercrime-as-a-service is also a contributing factor.
“Today, they can just buy [bots] and deploy them at will. Some of them, such as all-in-one or AIO bots, are even automated to conduct the entire process from start to finish,” he said.
Geographical Breakdown
Analysis of bot activity reveals hotspots of adversarial activity in the United States, Great Britain, Japan, Australia, and China.
Technology Fuels Bad Bot Availability
Rieniets is not surprised by the surge in bad bot traffic. Things have worsened as sophisticated bots originally developed for purchasing sneakers online are repurposed for fraud and abuse across the broader retail, e-commerce, travel, and hospitality segments.
Moreover, bots are a cost-effective, scalable way to generate profits through fraudulent techniques like credential stuffing and reselling cracked accounts, as well as abusive tactics such as automating the purchase and resale of highly sought-after items like electronics and sneakers.
“Accessibility of better bots leads to even bigger profits,” he added.
A related problem is account takeover (ATO), which thrives because consumers reuse the same login credentials across accounts. Fraudsters exploit this by using stolen credentials to launch credential-stuffing attacks.
“But consumers alone are not to blame. Many companies still rely on ineffective anti-bot defenses that cannot detect automated abuse against their customers’ account login,” he said.
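Neither Rieniets nor the report prescribes a specific countermeasure here, but one common server-side signal for credential stuffing is a single source attempting logins against many distinct usernames in a short window. The sketch below is purely illustrative; the window and threshold are assumptions, not recommended values.

```typescript
// Illustrative sketch only: flag a source that rotates through many distinct usernames,
// a pattern typical of credential stuffing. Window and limit are hypothetical.
const WINDOW_MS = 10 * 60 * 1000;   // 10-minute sliding window (assumption)
const MAX_DISTINCT_USERS = 20;      // per-source limit (assumption)

const attempts = new Map<string, { user: string; at: number }[]>();

function recordLogin(sourceIp: string, username: string, now = Date.now()): boolean {
  const recent = (attempts.get(sourceIp) ?? []).filter((a) => now - a.at < WINDOW_MS);
  recent.push({ user: username, at: now });
  attempts.set(sourceIp, recent);

  // Stuffing campaigns rotate usernames far faster than any legitimate user would.
  const distinctUsers = new Set(recent.map((a) => a.user)).size;
  return distinctUsers > MAX_DISTINCT_USERS; // true => challenge or block this source
}
```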
The Cheap Cost of Committing Cybercrime
Most surprising for Rieniets is that the average price of a stolen retail account is only $1.15. Such accounts are often worth far more to those willing to commit fraud, he opined.
For example, fraudsters can make unauthorized purchases and redeem loyalty points with these stolen accounts. Given how inexpensively and easily they can obtain stolen customer accounts online in marketplaces and private Discord and Telegram communities, they can make enormous profits, he explained.
Bot attackers have effectively solved traditional anti-bot defenses and Captchas: they can buy solver services that cost less than a penny per solution. This minuscule expense tips the scales in favor of the attacker because it makes attacks very inexpensive, while defenders spend heavily on mitigation and cannot pivot as quickly, Rieniets said.
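As a rough illustration of that asymmetry, consider a back-of-envelope calculation. The $1.15 account price and the sub-cent solver cost come from the report; the volume, yield rate, and per-account value below are hypothetical assumptions for illustration only.

```typescript
// Back-of-envelope sketch of the cost asymmetry. Only the $1.15 account price and
// sub-cent solver cost come from the report; every other figure is hypothetical.
const accountsPurchased = 1_000;
const pricePerAccount = 1.15;        // reported average price of a stolen retail account
const solverCostPerLogin = 0.008;    // "less than a penny per solution"
const usableShare = 0.10;            // hypothetical: 10% of accounts still hold value
const valuePerUsableAccount = 25;    // hypothetical: loyalty points or stored payment value

const attackerCost = accountsPurchased * (pricePerAccount + solverCostPerLogin);
const attackerReturn = accountsPurchased * usableShare * valuePerUsableAccount;

console.log(`cost ≈ $${attackerCost.toFixed(0)}, return ≈ $${attackerReturn.toFixed(0)}`);
// cost ≈ $1158, return ≈ $2500 — while the defender's mitigation spend dwarfs both.
```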
“A lot of what we observe with stolen accounts can be attributed to outdated anti-bot defenses where the operator has retooled, and the customer often is not even aware they are being bypassed,” he noted.
The solution for defenders is to increase the cost for adversaries to attack and retool, according to Rieniets. Modern anti-bot defenses can adapt, presenting themselves differently to the attacker every time.
This approach frustrates and deceives attackers, making success incredibly time-consuming and expensive. In doing so, these modern tools remove attackers’ ability to make an easy profit.
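Kasada does not describe how its own product implements this, but the general idea of presenting differently every time can be sketched: the server rotates among challenge variants and randomizes their parameters per session, so a bot scripted against one variant breaks on the next. The variant names and fields below are hypothetical.

```typescript
// Illustrative sketch of a rotating, randomized challenge (not Kasada's implementation).
// Each call yields a different challenge shape and parameters, forcing constant retooling.
import { randomBytes, randomInt } from 'crypto';

type Challenge =
  | { kind: 'proof-of-work'; difficulty: number; salt: string }
  | { kind: 'js-fingerprint'; probes: string[] }
  | { kind: 'timing-puzzle'; rounds: number };

function shuffle<T>(xs: T[]): T[] {
  // Fisher-Yates would be stricter; a random sort suffices for illustration.
  return [...xs].sort(() => Math.random() - 0.5);
}

function issueChallenge(): Challenge {
  const variants: (() => Challenge)[] = [
    () => ({ kind: 'proof-of-work', difficulty: randomInt(16, 22), salt: randomBytes(8).toString('hex') }),
    () => ({ kind: 'js-fingerprint', probes: shuffle(['canvas', 'webgl', 'fonts', 'audio']).slice(0, 3) }),
    () => ({ kind: 'timing-puzzle', rounds: randomInt(3, 8) }),
  ];
  return variants[randomInt(variants.length)]();
}

console.log(issueChallenge()); // a different challenge shape and parameters on every call
```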