The UK's National Cyber Security Centre has published a new report finding that generative AI can increase the risks of cyber threats such as ransomware.
Overall, the report found that generative AI will provide a “capability enhancement” to existing threats rather than being a source of entirely new ones. To take advantage of generative AI, threat actors will need to be sophisticated enough to gain access to “quality training data, significant expertise (both AI and cyber), and resources,” which the NCSC says is unlikely to happen before 2025. Going forward, threat actors “will be able to analyze exfiltrated data more quickly and effectively, and use it to train AI models.”
How generative AI can “improve” attacks
“We must ensure we harness AI technology for its vast potential and manage its risks, including its implications on the cyber threat,” NCSC CEO Lindy Cameron said in a news release. “The emerging use of AI in cyberattacks is evolutionary, not revolutionary, meaning it improves existing threats like ransomware, but does not transform the risk landscape in the near term.”
The report classified the threats (Figure A) by the potential “improvement” generative AI offers and by the type of threat actor: nation-state-sponsored, well-organized, and less-skilled or opportunistic attackers.
Figure A
Through 2025, the threat from generative AI will come from the “evolution and improvement of existing tactics, techniques and procedures,” not from new ones, according to the report.
AI services lower barrier to entry for ransomware attackers
Ransomware is expected to remain a dominant form of cybercrime, according to the report. Just as attackers offer ransomware as a service, they are now offering generative AI as a service as well.
SEE: Recent malware botnet captures cloud credentials from AWS, Microsoft Azure and more (TechRepublic)
“Artificial intelligence services reduce barriers to entry, increase the number of cybercriminals and will increase their capability by improving the scale, speed and effectiveness of existing attack methods,” said James Babbage, director general for threats at the National Crime Agency, as quoted in the NCSC news release about the study.
Ransomware actors are already using generative AI for reconnaissance, phishing and encryption, a trend the NCSC expects to continue “through 2025 and beyond.”
AI can facilitate social engineering
The report found that social engineering will get a significant boost from generative AI over the next two years. For example, generative AI can remove the spelling and grammatical errors that often mark spam messages. More broadly, generative AI can create novel content for attackers and defenders alike.
Phishing and malware attackers could use AI, but only the most sophisticated ones are likely to have it
Similarly, threat actors could use generative AI to obtain account or password information in the course of a phishing attack. Using generative AI for malware, however, will require a more advanced threat actor, according to the report. Creating malware that can evade current security filters would require training a model on large amounts of high-quality exploit data. The only groups likely to have access to such data today are state actors, and the report says there is a “realistic possibility” that such repositories exist.
Vulnerabilities may appear at a faster rate due to AI
Network administrators looking to patch vulnerabilities before they are exploited may find their jobs becoming harder as generative AI shortens the time between the identification and exploitation of vulnerabilities.
How defenders can use generative AI
The NCSC noted that some of the benefits generative AI provides to cyber attackers can also benefit defenders. Generative AI can surface patterns that speed up the detection and classification of attacks and help identify malicious emails or phishing campaigns.
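The report does not name specific defensive tooling, but the kind of pattern-based email triage it describes can be illustrated with a small example. The following is a minimal sketch using a classical text classifier rather than a generative model, assuming scikit-learn is installed; the inline dataset is hypothetical and stands in for a real labeled corpus of email bodies.

```python
# Minimal sketch: training a text classifier to flag likely phishing emails.
# The tiny inline dataset is hypothetical; a real deployment would use a
# large labeled corpus of email bodies.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: email text paired with a phishing/benign label.
emails = [
    "Your account is suspended, verify your password immediately",
    "Urgent: wire transfer required, click the link to confirm",
    "Meeting notes attached from this morning's standup",
    "Quarterly report draft ready for your review",
]
labels = ["phishing", "phishing", "benign", "benign"]

# TF-IDF turns each email into a weighted word-frequency vector;
# logistic regression then learns which patterns separate the classes.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

# Score a new message; in practice this would feed a human triage queue
# rather than trigger an automatic block.
print(model.predict(["Please verify your password at the link below"]))
```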
To improve global defenses against attackers using generative AI, the UK led the creation of the Bletchley Declaration in November 2023, an international agreement intended to address future risks from AI.
The NCSC and some UK private industry organizations have embraced AI to improve threat detection and security by design under the £2.6 billion ($3.3 billion) Cyber Security Strategy announced in 2022.