In a new report released this week, software giant Microsoft claims that America's rivals, including Iran, Russia and North Korea, are preparing to ramp up their cyberwarfare efforts using modern generative AI. The problem is exacerbated, the report adds, by a chronic shortage of qualified cybersecurity personnel: it cites the 2023 ISC2 Cybersecurity Workforce Study, which estimates that approximately 4 million additional security staff will be needed to face the coming wave of attacks. Microsoft's own research in 2023 highlighted a huge increase in password attacks over two years, from 579 per second to more than 4,000 per second.
The company's response has been the launch of Copilot for Security, an artificial intelligence tool designed to track, identify and block these threats more quickly and effectively than humans can. In one recent test, generative AI helped security analysts, regardless of experience level, work 44% more accurately and 26% faster across all types of threats. Eighty-six percent also said AI made them more productive and reduced the effort needed to complete their tasks.
Unfortunately, as the company acknowledges, the use of AI is not limited to the good guys. The technology's explosive rise is sparking an arms race, as threat actors seek to leverage new tools to cause as much damage as possible. Hence the publication of this threat report, which warns against the next escalation and confirms that OpenAI and Microsoft are partnering to detect and counter these bad actors and their tactics as they emerge.
The impact of generative AI on cyberattacks is already widespread. Darktrace researchers found a 135% increase in novel email-based attacks from January to February 2023, coinciding with the widespread adoption of ChatGPT. They also observed a rise in linguistically complex phishing attacks that used more words, longer sentences and more punctuation. All of this contributed to a 52% increase in email account takeover attempts, in which attackers impersonated the IT team of victim organizations.
The report outlines three main areas where malicious use of AI is likely to grow in the near future: improved reconnaissance of targets and weaknesses, improved malware development using sophisticated AI coding tools, and assistance with learning and planning. The enormous computing resources required mean that the first adopters will almost certainly be nation states.
Several of these cyber threat entities are mentioned by name. Strontium (also tracked as APT28) is a highly active cyberespionage group that has operated from Russia for two decades. It goes by several aliases and is expected to dramatically increase its use of advanced artificial intelligence tools as they become available.
North Korea also maintains a huge cyber-espionage presence. Some reports say that more than 7,000 people have been running continuous threat operations against the West for decades, with activity increasing by 300% since 2017. One such group, Velvet Chollima (also known as Emerald Sleet), primarily targets academics and NGOs. Here, AI is increasingly used to improve phishing campaigns and to research vulnerabilities.
The report highlights two other major players in the global cyberwar space: Iran and China. Both countries have also increased their use of large language models (LLMs), primarily to research opportunities and gain insight into potential areas of future attack. Beyond these geopolitical operations, Microsoft's report describes increased use of AI in more conventional criminal activity, such as ransomware, fraud (especially via voice cloning), email phishing, and general identity manipulation.
As the conflict intensifies, we can expect Microsoft and partners like OpenAI to develop an increasingly sophisticated set of tools for threat detection, behavioral analysis, and other methods of identifying attacks quickly and decisively.
The report concludes: “Microsoft anticipates that AI will evolve social engineering tactics, creating more sophisticated attacks including deepfakes and voice cloning…prevention is key to combating all cyber threats, whether traditional or AI-enabled.”