With the rapid advancement of technology, the scale and sophistication of cyberattacks are increasing. Anne Neuberger, US deputy national security adviser, highlighted this concern in 2023, referencing FBI and IMF data projecting that the annual cost of cybercrime will exceed $23 trillion by 2027. This alarming projection underscores the urgent need for security systems to evolve as quickly as AI is being weaponized, because AI has significantly increased the complexity and effectiveness of scams.
Originally, phishing attacks were relatively simplistic: scammers would pose as legitimate entities via email to trick people into revealing sensitive information, such as passwords and credit card numbers. These cybercriminals also tricked victims into clicking malicious links or opening infected attachments, which silently installed malware on their devices.
However, the advent of enterprise SMS communications, QR codes, and advanced voice manipulation technologies has made phishing schemes increasingly difficult to detect and significantly expanded their potential for harm. This raises a critical question: How does integrating AI into these tactics amplify the chances of tricking people into divulging sensitive information or engaging with harmful links?
Founder and CEO of EasyDMARC.
Quishing: QR code phishing
Since the start of the COVID-19 pandemic, there has been a sharp increase in the prevalence of “quishing” scams, a phishing technique that exploits QR codes. This increase coincided with companies increasingly adopting QR code technology as a contactless alternative to physical documents. Once scanned, these QR codes can direct users to malicious websites designed to collect personal data, such as credit card information, or to automatically download malware to the scanning device.
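Because a QR code is ultimately just an encoded URL, one basic defensive step is to inspect the decoded link before visiting it. The sketch below shows a few illustrative red-flag checks one might run on a URL decoded from a QR code; the checks and the TLD watchlist are assumptions for demonstration, not a real detection product, and genuine phishing detection would rely on reputation feeds and threat intelligence.

```python
from urllib.parse import urlparse

# Assumption: a small example watchlist of TLDs; real lists are far larger
# and maintained from live threat-intelligence feeds.
SUSPICIOUS_TLDS = {"zip", "mov", "top"}

def looks_suspicious(url: str) -> list:
    """Return simple red flags for a URL decoded from a QR code."""
    flags = []
    parsed = urlparse(url)
    if parsed.scheme != "https":
        flags.append("not HTTPS")
    host = parsed.hostname or ""
    if host.replace(".", "").isdigit():
        flags.append("raw IP address instead of a domain")
    if host.rsplit(".", 1)[-1] in SUSPICIOUS_TLDS:
        flags.append("TLD on watchlist")
    if "@" in parsed.netloc:
        flags.append("credentials embedded in URL")
    return flags

# A plain-HTTP link pointing at a raw IP address trips two of the checks.
print(looks_suspicious("http://192.168.0.10/pay"))
```

Heuristics like these only catch the crudest lures, which is precisely why the article's later point about layered defenses and user education matters.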
In 2022, HP discovered a sophisticated deception scheme in which people received emails disguised as notifications from a package delivery service. These emails instructed recipients to make a payment using a QR code. HP's fourth quarter report from the same year revealed that these QR code-based phishing attacks were much more prevalent than previously thought. The findings highlighted a growing trend among fraudsters to use QR codes as a means to imitate legitimate businesses, with the aim of tricking people into handing over their personal information.
Vishing: voice phishing
As AI advances, the advent of voice generation and alteration technologies presents a significant and emerging threat. The Federal Trade Commission, recognizing this danger, issued an alert in 2023 warning against the deceptive potential of AI-generated voice clones in phone calls. This warning was not unfounded: in one notable case in early 2024, a finance worker in Hong Kong was tricked into transferring £20 million to an offshore account by a fraudster using deepfake technology.
Vishing scams, or voice phishing, have an alarmingly high success rate. They exploit the element of surprise, forcing victims to make hasty decisions under pressure. This immediacy, inherent to voice calls, contrasts sharply with email-based scams, where recipients may pause to question the legitimacy of the request. Scammers can also personalize their approach and adapt in real time to the victim's responses and emotional state over the phone, an advantage that email scams lack. This personalized manipulation is much more difficult to replicate through text-based phishing attempts.
VALL-E, a notable example of such technology, can mimic a person's unique voice, emotions, and speech patterns from just a three-second sample of their voice. This capability, also available through alternative models, becomes especially potent in the hands of cybercriminals who target high-profile individuals, such as company CEOs, for whom abundant audiovisual material is available online for AI training purposes. As deepfake technology advances and becomes more accessible, the boundary between reality and artificial fabrication becomes increasingly blurred, amplifying the potential for vishing attacks to deceive and manipulate with unprecedented effectiveness.
Smishing: SMS phishing
A 2023 study by NatWest revealed that 28% of UK residents noticed an increase in fraudulent activity compared to the previous year, with fraudulent SMS messages being the main form of phishing. While many of these SMS scams are easily identifiable as frauds, those that manage to evade detection can cause considerable damage.
One notable incident in 2022 involved a self-taught hacker posing as an IT professional to obtain an Uber employee's password. This seemingly simple smishing attack paved the way for the hacker to gain extensive access to Uber's internal networks. While it is tempting to view these incidents as isolated cases, the evolution of generative AI is likely to allow even novice cybercriminals to execute more complex phishing operations with minimal technical knowledge.
Adding to the concern, the UK's National Cyber Security Centre (NCSC) issued a warning earlier this year about the potential of generative AI to improve the credibility of scams. These AI tools are now being used to create fake “decoy documents” that are free of the usual signs of phishing attempts, such as translation, spelling, or grammatical errors, thanks to the refinement capabilities of accessible generative AI chatbots and platforms. This development highlights the growing challenge of distinguishing genuine communications from fraudulent ones, raising the stakes in the ongoing battle against cybercrime.
How can cybersecurity keep pace with generative AI?
In March, Microsoft released a report showing that 87% of UK organizations are now more susceptible to cyberattacks due to the increasing accessibility of artificial intelligence tools, and 39% of these entities are considered “high risk”. This alarming statistic highlights the urgent need for the UK to strengthen its cyber defense mechanisms.
The advent of AI has significantly changed the dynamics of cybersecurity, introducing sophisticated methods that cybercriminals can exploit, such as machine learning algorithms designed to discover and exploit software flaws for more targeted and powerful cyberattacks. However, Forrester research indicates that 90% of cyberattacks will continue to involve human interaction, suggesting that traditional methods such as phishing are still very effective. This underscores the importance of companies not only strengthening their defenses against basic tactics, but also staying up-to-date on how AI advances can alter these attack strategies.
To mitigate common attack vectors, businesses should stop phishing attempts by blocking malicious emails before they ever reach users' inboxes. Implementing email authentication protocols such as SPF, DKIM, and DMARC can significantly reduce the chances of spoofed emails penetrating email defenses. Despite these measures, however, some phishing attempts will still slip past cybersecurity barriers, making it essential to cultivate a strong security culture within organizations. This involves educating employees not only to recognize threats, but also to respond appropriately to them.
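A DMARC policy is published as a DNS TXT record on the `_dmarc` subdomain and tells receiving mail servers what to do with messages that fail SPF or DKIM alignment. As a minimal sketch of what such a record contains, the snippet below parses a sample DMARC record string into its tag/value pairs; the record itself is an illustrative example, not any real domain's policy, and fetching a live record would additionally require a DNS lookup (e.g. via a resolver library).

```python
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record (as published at _dmarc.<domain>)
    into its tag/value pairs, e.g. {"v": "DMARC1", "p": "reject", ...}."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if not part:
            continue
        key, _, value = part.partition("=")
        tags[key.strip()] = value.strip()
    return tags

# Illustrative sample record: p=reject asks receivers to refuse mail
# that fails authentication; rua= is the address for aggregate reports.
record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com; pct=100"
policy = parse_dmarc(record)
print(policy["p"])  # the domain's policy for failing mail: none, quarantine, or reject
```

A domain starting out with DMARC typically publishes `p=none` to monitor reports first, then tightens to `quarantine` and finally `reject` once legitimate mail flows are verified.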
The 2023 Verizon Data Breach Investigations Report notes that one in three employees could interact with phishing links and one in eight could reveal personal information when asked. This statistic is a stark reminder of the continued need to improve organizational cybersecurity practices and emphasizes the critical role employees play in the cybersecurity ecosystem. Leveraging technology to detect and neutralize phishing threats is essential, but it cannot operate in a vacuum. As AI poses increasing challenges, fostering an informed and proactive workforce becomes crucial to intercepting phishing attempts that bypass digital safeguards.
This article was produced as part of TechRadar Pro's Expert Insights channel, where we feature the best and brightest minds in today's tech industry. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here.