Report advises how to ‘steer clear’ of bad bots

Bad bots are harming organisations across a wide array of industry verticals

According to the ‘2018 Bad Bot Report’, which is based on 2017 data collected from Distil Networks’ global network, “bad bots interact with applications in the same way a legitimate user would, making them harder to detect.”

Bots enable fraudsters, competitors and attackers to perform a variety of malicious activities, facilitating high-speed abuse, misuse, and attacks on websites and APIs.

Typically, activities include web scraping, competitive data mining, personal and financial data harvesting, account takeover, digital ad fraud, spam, transaction fraud, and more.

The hardest part of tackling bot behaviour is identifying its origin, according to the report.

Furthermore, pursuing legal recourse against bot operators can be extremely costly and time-consuming, and if the operators are in another country, the law offers no guarantee of success.

What makes bad bots ‘bad’?

  • Every business with an online presence is regularly bombarded by bad bots on its website, APIs, or mobile apps.
  • Unchecked bad bots cost businesses money every day. Unlike data breaches, which are comparatively rare, automation abuse happens 24 × 7 × 365 because bad bots never sleep.
  • Bad bots are on your website for a bad purpose. Understanding what that purpose is helps you address the problem.

Despite this, the report found that firms ignore bots simply because they don’t understand the havoc they cause.

Distil Networks created an industry-standard system that classifies bad bots into the following four sophistication levels:

  • SIMPLE: Connecting from a single, ISP-assigned IP address, this type connects to sites using automated scripts, not browsers, and doesn’t self-report (masquerade) as being a browser.
  • MODERATE: Being more complex, this type uses “headless browser” software that simulates browser technology—including the ability to execute JavaScript.
  • SOPHISTICATED: Producing mouse movements and clicks that fool even sophisticated detection methods, these bad bots mimic human behaviour and are the most evasive. They use browser automation software, or malware installed within real browsers, to connect to sites.
  • ADVANCED PERSISTENT BOTS (APBS): APBs combine moderate and sophisticated technologies and methods to evade detection while maintaining persistence on targeted sites. They tend to cycle through random IP addresses, enter through anonymous proxies and peer-to-peer networks and are able to change their user agents.

According to the report, the easiest way to prevent bad bots from hitting your website is to block out-of-date user agents from gaining access.
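A minimal sketch of this first line of defence might look like the following. The patterns below are purely illustrative, not Distil Networks’ actual detection rules; a real deployment would maintain a curated, regularly refreshed list of outdated user-agent strings.

```python
import re

# Illustrative patterns only -- a production list would be curated
# and updated as browser versions age out.
OUTDATED_UA_PATTERNS = [
    re.compile(r"MSIE [4-7]\."),            # Internet Explorer 4-7
    re.compile(r"Firefox/[1-9]\."),         # single-digit Firefox releases
    re.compile(r"Chrome/(?:[1-9]|1\d)\."),  # Chrome versions below 20
]

def should_block(user_agent: str) -> bool:
    """Return True if the User-Agent is missing or matches an outdated browser."""
    if not user_agent:
        # Many simple bots send no User-Agent header at all.
        return True
    return any(p.search(user_agent) for p in OUTDATED_UA_PATTERNS)
```

As the report notes, modern browsers auto-update, so the false-positive risk of rules like these is low; rather than blocking outright, the same check could route suspect traffic to a CAPTCHA.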

Steering clear of bad bots

Distil Networks also recommends the following to help steer clear of bad bots:

  • BLOCK OR CAPTCHA OUTDATED USER AGENTS/BROWSERS: The default configurations for many tools and scripts contain user-agent string lists that are largely outdated. This step won’t stop the more advanced attackers, but it might catch and discourage some. The risk of blocking outdated user agents/browsers is very low; most modern browsers force auto-updates on users, making it more difficult to surf the web using an outdated version.
  • BLOCK KNOWN HOSTING PROVIDERS AND PROXY SERVICES: Even if the most advanced attackers move to other, more difficult-to-block networks, many less sophisticated perpetrators use easily accessible hosting and proxy services. Disallowing access from these sources might discourage attackers from coming after your site, API, and mobile apps.
  • PROTECT EVERY BAD BOT ACCESS POINT: Be sure to protect exposed APIs and mobile apps—not just your website—and share blocking information between systems wherever possible. Protecting your website does little good if backdoor paths remain open.
  • CAREFULLY EVALUATE TRAFFIC SOURCES: Monitor traffic sources carefully. Do any have high bounce rates? Do you see lower conversion rates from certain traffic sources? These can be signs of bot traffic.
  • INVESTIGATE TRAFFIC SPIKES: Traffic spikes appear to be a great win for your business. But can you find a clear, specific source for the spike? One that is unexplained can be a sign of bad bot activity.
  • MONITOR FOR FAILED LOGIN ATTEMPTS: Define your failed login attempt baseline, then monitor for anomalies or spikes. Set up alerts so you’re automatically notified if any occur. Advanced “low and slow” attacks don’t trigger user or session-level alerts, so be sure to set global thresholds.
  • MONITOR INCREASES IN FAILED VALIDATION OF GIFT CARD NUMBERS: An increase in failures, or even traffic, to gift card validation pages can be a signal that bots such as GiftGhostBot are attempting to steal gift card balances.
  • PAY CLOSE ATTENTION TO PUBLIC DATA BREACHES: Newly stolen credentials are more likely to still be active. When large breaches occur anywhere, expect bad bots to run those credentials against your site with increased frequency.
  • EVALUATE A BOT MITIGATION SOLUTION: The bot problem is an arms race. Bad actors are working hard every day to attack websites across the globe. The tools used constantly evolve, traffic patterns and sources shift, and advanced bots can even mimic human behaviour. Hackers using bots to target your site are distributed around the world, and their incentives are high. In early bot attack days you could protect your site with a few tweaks; this report shows that those days are long gone. Today it’s almost impossible to keep up with all of the threats on your own.
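The hosting-provider recommendation above can be sketched as a simple IP-range check. The network ranges here are reserved documentation addresses standing in for real data-centre and proxy ranges, which would come from published IP feeds and need regular refreshing.

```python
import ipaddress

# Illustrative ranges only -- real lists come from published
# data-centre and proxy-service IP feeds.
BLOCKED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # stand-in "hosting provider"
    ipaddress.ip_network("198.51.100.0/24"),  # stand-in "proxy service"
]

def from_blocked_network(client_ip: str) -> bool:
    """Return True if the client IP falls inside a blocked range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in BLOCKED_NETWORKS)
```

As the report concedes, this only deters the less sophisticated perpetrators; advanced attackers simply move to harder-to-block networks.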
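The failed-login recommendation, with its global threshold for catching “low and slow” attacks, can be sketched as a site-wide sliding-window counter. The window and threshold values below are placeholders; a real baseline would be derived from your own historical traffic.

```python
from collections import deque
import time

class FailedLoginMonitor:
    """Track site-wide failed logins in a sliding window and flag spikes.

    A global counter catches "low and slow" credential attacks that
    never trip per-user or per-session alerts.
    """

    def __init__(self, window_seconds=300, global_threshold=100):
        self.window = window_seconds        # placeholder values, not a
        self.threshold = global_threshold   # recommended baseline
        self.failures = deque()             # timestamps of failed attempts

    def record_failure(self, now=None):
        """Record one failed login; return True if an alert should fire."""
        now = time.time() if now is None else now
        self.failures.append(now)
        # Drop events that have aged out of the window.
        while self.failures and self.failures[0] < now - self.window:
            self.failures.popleft()
        return len(self.failures) > self.threshold
```

The same windowed-counter shape also fits the gift-card recommendation: count failed gift-card-number validations instead of failed logins and alert on the spike.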

Written by Leah Alger