Bots have evolved rapidly over the years and, with the new era of Artificial Intelligence (AI), they continue to progress. A report shows that in 2022, 47.4% of all internet traffic came from bots, an increase of 5.1% over the previous year. Meanwhile, human traffic, at 52.6%, was at its lowest level in eight years.
Internet bots are often associated with harmful or suspicious activities, such as distributed denial of service (DDoS) attacks or the spread of misinformation on social media. However, bots are also widely used for task automation and efficiency. Therefore, it is necessary for companies to learn to distinguish between the two.
Now that AI has made various tasks, including coding, easier to scale, there is no doubt that cybercriminals will continue to employ malicious bots to attack businesses and disrupt operations. At the same time, good bots will keep evolving, driven by the same advances in AI, and offer undeniable benefits by streamlining tedious manual business processes.
Good bots: intention, behavior, impact
One of the best practices for identifying whether a bot is good or bad is to look at three key factors: intent, behavior, and impact. A good bot has a legitimate purpose, such as automating time-consuming jobs or performing tasks that would simply be impossible to do manually, like collecting public web data at scale and creating automated, real-time data streams.
Good bots follow a certain code of conduct. In general, they positively impact websites and their users by performing tasks such as indexing pages for search engines, helping people find information or compare prices, and identifying malicious activity on the web.
Below are some examples of the most common good bots:
Data automation bots
Web intelligence software collects publicly available data, such as product prices and descriptions for market research, travel fares for price comparison, or brand mentions for trademark protection and anti-counterfeiting purposes. Data automation bots are employed by e-commerce price comparison websites and travel fare aggregators.
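To make the idea concrete, here is a minimal, illustrative sketch of such a data collection bot in Python. The URLs, the `itemprop="price"` markup, the user agent name, and the five-second pacing are assumptions chosen for the example, not a reference to any particular site or tool.

```python
"""Minimal sketch of a price-collection bot (illustrative only).

Assumes a hypothetical product page that marks prices up with
itemprop="price"; real sites vary and must be inspected individually.
"""
import re
import time
import urllib.request

PRODUCT_URLS = [
    "https://example.com/products/123",  # hypothetical product pages
    "https://example.com/products/456",
]
# Look for schema.org-style price markup in the raw HTML.
PRICE_PATTERN = re.compile(r'itemprop="price"[^>]*content="([\d.]+)"')


def fetch_price(url: str):
    """Download one page and return the first marked-up price, if any."""
    request = urllib.request.Request(
        url, headers={"User-Agent": "example-price-bot/1.0"}  # identify the bot honestly
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        html = response.read().decode("utf-8", errors="replace")
    match = PRICE_PATTERN.search(html)
    return match.group(1) if match else None


if __name__ == "__main__":
    for url in PRODUCT_URLS:
        print(url, fetch_price(url))
        time.sleep(5)  # pace requests so the bot does not overload the server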
Search engine crawlers
Also known as web crawlers or spiders, these bots scan the content of web pages and index it. Once the content is indexed, it can appear on search engine results pages. These bots are essential for search engine optimization, and most sites want their pages crawled and indexed as soon as possible after publication.
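Part of the "code of conduct" good crawlers follow is checking a site's robots.txt file before fetching pages. The sketch below shows how that check looks with Python's standard library; the site, paths, and crawler name are placeholders for illustration.

```python
"""Sketch of how a well-behaved crawler consults robots.txt before fetching.
The site, paths, and user agent below are placeholders, not real targets."""
import urllib.robotparser

SITE = "https://example.com"
USER_AGENT = "example-crawler/1.0"  # hypothetical crawler name

robots = urllib.robotparser.RobotFileParser()
robots.set_url(f"{SITE}/robots.txt")
robots.read()  # download and parse the site's crawl rules

for path in ("/", "/blog/latest-post", "/admin"):
    url = f"{SITE}{path}"
    if robots.can_fetch(USER_AGENT, url):
        print("allowed to crawl:", url)  # a real crawler would fetch and index here
    else:
        print("disallowed, skipping:", url)

# Some sites also publish a preferred delay between requests.
print("suggested crawl delay:", robots.crawl_delay(USER_AGENT))
```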
Site monitoring bots
This software monitors sites for backlinks or system outages. It can alert users in the event of a major change or downtime, allowing teams to react quickly and restore their services without significant losses.
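A monitoring bot of this kind can be very small. The sketch below polls a single URL and flags downtime; the URL, check interval, and print-based "alert" are placeholder choices standing in for a real alerting integration.

```python
"""Minimal sketch of a site monitoring bot: poll a URL and flag downtime.
The URL, interval, and alerting mechanism are illustrative placeholders."""
import time
import urllib.error
import urllib.request

MONITORED_URL = "https://example.com"  # hypothetical site to watch
CHECK_INTERVAL_SECONDS = 60


def site_is_up(url: str) -> bool:
    """Return True if the site answers with a successful HTTP status."""
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status < 400
    except (urllib.error.URLError, TimeoutError):
        return False


if __name__ == "__main__":
    while True:
        if not site_is_up(MONITORED_URL):
            # A real monitoring bot would page an on-call engineer or post
            # to a chat channel here instead of printing to the console.
            print("ALERT:", MONITORED_URL, "appears to be down")
        time.sleep(CHECK_INTERVAL_SECONDS)
```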
Chatbots
Chatbots are programmed to answer certain questions. Many companies integrate these bots into their websites to ease the workload of customer service teams. The chatbot market is growing rapidly as more and more companies employ generative AI chatbots, and is expected to reach $1.25 billion by 2025.
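At its simplest, "programmed to answer certain questions" can mean a small rule-based script like the sketch below. The questions, answers, and keyword-matching approach are illustrative assumptions; production chatbots, including generative AI ones, are far more sophisticated.

```python
"""Minimal sketch of a rule-based FAQ chatbot (illustrative only)."""

# Canned answers keyed by the keywords that should trigger them.
FAQ = {
    ("opening", "hours", "open"): "We are open Monday to Friday, 9am to 5pm.",
    ("refund", "return"): "You can request a refund within 30 days of purchase.",
    ("shipping", "delivery"): "Standard shipping takes 3-5 business days.",
}


def answer(question: str) -> str:
    """Return the first canned answer whose keywords appear in the question."""
    text = question.lower()
    for keywords, reply in FAQ.items():
        if any(keyword in text for keyword in keywords):
            return reply
    return "Sorry, I don't know that one. Let me connect you to a human agent."


if __name__ == "__main__":
    print(answer("What are your opening hours?"))
    print(answer("How do I get a refund?"))
```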
The bad bots
We can identify bad bots by considering the same three key factors: intent, behavior, and impact. The intent of bad bots is to exploit or harm websites and their users. Their behavior is unethical and, in most cases, illegal: this software accesses pages without authorization and performs actions such as stealing personal data, launching DDoS attacks, and spreading malware.
Malicious bots typically do not respect a server's capacity, overloading it with requests and slowing down the target site's performance.
One of the most popular “use cases” for bad bots is ad fraud, which generates fake traffic and inflated advertising metrics, such as click-through rate (CTR), by employing bots that produce clicks, views, or impressions. Below are more examples of the most common bad bots:
Account takeover bots
Most are familiar with credential stuffing and credential cracking: automated threats that can lead to identity theft or grant unauthorized access to user accounts. Account takeover bots can make mass login attempts, leading to infrastructure strain or even business losses.
Spam bots
These bots spread fake news and propaganda and post fake reviews of competing products and services. Spam bots can also hide malicious content, such as malware, within clickbait links. In more serious cases, this can lead to fraud.
Scalper bots
While scalper bots have been around for a while, they became especially active during the pandemic. This software automates mass purchases of goods or services, rapidly depleting stock. The items or services are then resold at a much higher price, as is often seen with event tickets and limited-edition products.
Legal and ethical implications
Specific tactics, ranging from behavioral analytics to user agent strings and traffic patterns, allow website owners to more easily identify bad bots. Unfortunately, in the age of AI and the rise of commercial bot farms, it is a constant battle. The ethical issues and implications of using bad bots are more than evident. However, legal regulation is still lacking, and bot activity often falls into a gray area.
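To illustrate what user-agent and traffic-pattern checks can look like in their simplest form, here is a hedged sketch over parsed access-log records. The log format, thresholds, and "suspicious" agent markers are assumptions for the example; real bot detection combines many more signals, and sophisticated bot farms evade naive rules like these.

```python
"""Illustrative sketch of simple bot-detection heuristics over access-log records.
Thresholds, the record format, and the suspicious user agents are assumptions."""
from collections import Counter

# Each record: (client_ip, user_agent) - a stand-in for parsed access-log lines.
requests_seen = [
    ("203.0.113.7", "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"),
    ("198.51.100.9", "python-requests/2.31"),
    ("198.51.100.9", "python-requests/2.31"),
    # ... imagine thousands more parsed log lines here
]

MAX_REQUESTS_PER_WINDOW = 100                        # traffic-pattern signal
SUSPICIOUS_AGENT_MARKERS = ("curl", "python-requests")  # user-agent signal


def flag_suspects(records):
    """Return client IPs that request too fast or send a suspicious/empty user agent."""
    per_ip = Counter(ip for ip, _ in records)
    suspects = set()
    for ip, agent in records:
        too_fast = per_ip[ip] > MAX_REQUESTS_PER_WINDOW
        bad_agent = (not agent) or any(
            marker in agent.lower() for marker in SUSPICIOUS_AGENT_MARKERS
        )
        if too_fast or bad_agent:
            suspects.add(ip)
    return suspects


print(flag_suspects(requests_seen))
```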
In 2019, California's Bolstering Online Transparency Act (the BOT Act) came into effect, requiring clear disclosure and transparency for bot use, meaning that bots are not allowed to hide their identities. The BOT Act primarily targets automated software that aims to influence purchasing and voting behavior. However, at least in theory, it can also address other bot-related challenges: misinformation, fake news, and artificially inflated social media metrics.
In the EU, the EU AI Act is expected to address areas such as AI-assisted deepfakes and disinformation. However, as of today, it is not yet in force.
Although legal regulation remains murky, there are clear legal and financial risks that companies should consider before using bots, even if they believe their bots are “good.” For example, a chatbot may give bad advice, which can lead to reputational damage and legal liability.
Even more extreme situations can occur in cases of poor data management. In 2020, Ticketmaster UK was fined £1.25 million over a data breach that stemmed from a security flaw in a chatbot on its website.
Summary
Distinguishing good bots from bad ones is essential for any business. But the world is rarely just black or white. Some bots may not be inherently good or bad; what pushes them one way or the other is their intent, behavior, and impact. If you make sure that the bot you are using has a reasonable and fair intent, respects the website's rules, and does not cause harm, you will most likely be on the good side.
However, examples show that even the most innocent bots can sometimes cause problems, ranging from reputational damage to legal and financial liability due to data mishandling. Therefore, it is vital to know the risks before deploying bots in the enterprise, whether they are simple chatbots or complex web intelligence gathering tools.
This article was produced as part of TechRadar Pro's Expert Insights channel, where we feature the best and brightest minds in today's tech industry. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc.