For many, the mention of bots conjures up images of friendly website automations eager to provide answers: servile avatars programmed to make life easier.
However, for those working in a certain niche corner of cybersecurity, "How can I help you?" is only a small code change away from "How can I hurt you?" In the hands of unscrupulous operators, bots are increasingly being used for malicious purposes. Their targets? Any brand that transacts with customers through websites, APIs, and mobile applications.
An exceptionally exposed attack surface
If online commerce were a country, it would serve more than 5 billion people worldwide and, at $6.3 trillion, have the third-largest gross domestic product (GDP) in the world. These huge revenue streams are only possible because online businesses automate customer interactions on a massive scale. An untold number of payments, logins, data requests, product searches, and more are collectively driving the inexorable rise of digital businesses.
Unfortunately, threat actors have also noticed the value that flows through these interfaces.
Using malicious automation, threat actors compromise this exposed web attack surface. Attackers deploy bots with sophisticated custom capabilities, effectively arming themselves with an army of fake website users able to operate with extraordinary precision, speed, volume, and stealth. This automated tooling allows attackers to corrupt underlying business logic, bleed money, and steal IP, ultimately damaging the target company's reputation and degrading website performance.
A bot for all reasons
Threat actors leverage bots to wield a variety of attack techniques, the most disruptive of which include scalping, credential stuffing, and scraping.
Scalping – Attackers unleash bots to raid digital shelves. By buying up in-demand items such as event tickets and sneakers at dizzying speed, they leave real customers empty-handed, then list the goods for mass resale at inflated prices on secondary markets.
Credential stuffing – This technique exploits the web attack surface with malicious automation to launch volumetric identity attacks for fraudulent purposes. Attackers bombard interfaces with stolen or synthetic credentials, ultimately gaining an illicit foothold in customer accounts or creating legions of fake identities to resell on dark corners of the internet.
Scraping – Threat actors scrape and extract wholesale the unique content, pricing, and inventory data residing on the web attack surface. Since malicious automations harvest IP for an average of four months before being detected, value is continually lost.
Buckling under the weight of these huge automated volumetric attacks, websites slow down, racking up infrastructure costs and lost customers on top of what has already been stolen.
The impact of malicious automation is cumulative: a gradual, parasitic hemorrhage of financial, reputational, and customer value that goes undetected by traditional controls.
In total, this typically costs businesses $85.6 million each year, dwarfing the average ransomware payout of $1.5 million.
A very real human impact
The impact on people is equally cumulative. Research has found that, at the mercy of the scarcity created by large-scale scalping attacks, people are willing to pay 13% more for goods and services, even when they fear being scammed.
The normalization of bots is also pushing some people toward questionable behavior. More than a quarter of those under 35 admit to having rented a bot to secure the goods and services they want, despite knowing they are operating in murky legal territory. It is a seemingly endless cycle of unethical behavior and fraud, reinforced by technology and made easier by the distance of a keyboard.
A sophisticated solution for a sophisticated attack
The legalities of bots are confusing. Some, such as those that abuse stolen identities, are clearly illegal. Others operate in gray areas: they may violate a website's terms and conditions, for example, without breaking any law.
Generally speaking, official policy is still playing catch-up. Some regulations, such as the US Better Online Ticket Sales (BOTS) Act and even EU laws that attempt to mitigate and manage the harms of AI, address some concerns, but only provide partial coverage.
For targeted brands, mitigating the threat of malicious automation means overcoming a number of technical hurdles. First, bot attacks span the entire web attack surface, requiring visibility into the huge volumes of traffic passing through websites, APIs, and mobile applications. At that scale, large online brands struggle to detect sophisticated bots that draw on an arsenal of disguises to impersonate real users. As a result, legacy technologies fail, either denying access to genuine customers or letting bots pass through unchecked.
Addressing the problem effectively requires both strong regulation and technological innovation. Spurred by the growing harm to consumers, forward-thinking politicians and legislators have grasped the magnitude of the impact and are beginning to clamp down on perpetrators. Likewise, new technologies capable of intelligently detecting bots within huge datasets using machine learning are starting to earn the trust of security teams.
However, what would truly force action is greater awareness of the magnitude of the problem. Bots are increasing exponentially in scale, speed, and effectiveness; the question is, will we respond accordingly?
This article was produced as part of TechRadar Pro's Expert Insights channel, where we feature the best and brightest minds in today's technology industry. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc.