Humans are fundamentally social creatures, and language is the basis of how we socialize and communicate. It is the basis of understanding, and therefore of coexistence. Whether we realize it or not, most of us speak two “languages”: the official and business language, and the dialect spoken in the region where we grew up. Hearing or reading the latter can be disarming, making us feel closer to the person who writes or speaks it.
The challenge with generative AI (GenAI) is that it gives threat actors with little understanding of such linguistic subtleties the ability to get inside our heads. It could bolster their efforts to socially engineer victims and run compelling fraud and disinformation campaigns.
European Business Consultant at Trend Micro.
The language of cybercrime
Reading the dialect of our birthplace or childhood can have a strange psychological effect on many of us, creating a feeling of empathy with the person who writes it. Even when the dialect is artificially generated by GenAI, it can have a similar impact.
Unfortunately, however, there are opportunities for threat actors here as well. Take phishing as an example. It remains one of the main vectors for cyberattacks and accounted for almost a quarter of all ransomware attacks in the fourth quarter of 2023. Fundamentally, it relies on social engineering: the scammer's ability to manipulate victims into carrying out their orders. Scammers could do this using official logos and sender domains, but language also plays a key role.
This is where GenAI could give opportunistic threat actors an advantage. Writing phishing missives in a dialect that the recipient instantly understands could increase trust levels and trick the victim into believing what they are told. This is unlikely to work in a business environment, but it could be used in scams targeting consumers. GenAI is already predicted to boost phishing by generating grammatically perfect content in multiple languages. Why not in multiple dialects, too?
The same logic could lead scammers to use GenAI to gain their victims' trust in romance scams and other types of confidence fraud. The use of dialects could play a key role in overcoming our increasingly skeptical attitude towards the people we meet online. Romance fraud cost victims $734 million in 2022, according to the FBI, and the bad guys are always looking for innovative ways to increase their loot.
Building bombs and fake news
Another threat looms large this year: misinformation. Together with disinformation, it was recently ranked by the World Economic Forum (WEF) as the number one global risk over the next two years. With around a quarter of the world's population heading to the polls in 2024, there is growing concern that nefarious actors will try to tilt results towards their favored candidates or undermine confidence in the entire democratic process. And while savvy Internet users are increasingly wary of the news they read online, dialect could once again be a trump card for threat actors.
First of all, dialect is not widely used in writing, which means we may pay more attention to content written in one. We might read a social media post written in dialect if only for the pleasure of deciphering what it means. And if it is our own dialect, we may feel instantly closer to the person (or machine) who posted it. Politicians and cybersecurity experts can warn us about election interference by foreign actors, but what could seem less “foreign” than an account that posts in a local or regional dialect close to home?
Finally, consider how rare languages and dialects can allow threat actors to jailbreak GenAI systems. Researchers at Brown University in the US used rarely spoken languages, such as Gaelic, to do exactly this with ChatGPT. The OpenAI chatbot has guardrails built in, such as refusing to give the user instructions on how to build a bomb. Yet when the researchers made unethical requests in rare languages, they were able to access the prohibited information. According to media reports, OpenAI is aware of the risk and is already taking steps to mitigate it. But we should remember that although GenAI seems “intelligent”, it can sometimes have the naivety of a four-year-old child.
It's time to educate
So what is the solution? Certainly, AI developers need to build better protections against abuse of GenAI's dialect-generation capabilities. But users may also need to improve their understanding of potential threats and increase their skepticism about what they read and see online. Companies should include dialect in their anti-fraud and anti-phishing training programs, and governments and industry bodies may want to run broader public awareness campaigns. As GenAI is increasingly used for malicious purposes, poor language skills may, over time, even become a sign of credibility in written communication.
That's not where we are now. But as cybersecurity professionals, we must recognize that it could be soon.
This article was produced as part of TechRadar Pro's Expert Insights channel, where we feature the best and brightest minds in today's tech industry. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here.