IT Leaders Fear AI-Driven Cybersecurity Costs Will Skyrocket


IT leaders are concerned about the skyrocketing costs of cybersecurity tools, which are being flooded with AI features. Meanwhile, hackers are largely avoiding AI, as there are relatively few discussions about how they could use it on cybercrime forums.

In a survey of 400 IT decision-makers by security firm Sophos, 80% believe that generative AI will significantly increase the cost of security tools. This tracks with separate Gartner research predicting that global tech spending will rise by almost 10% this year, largely due to AI infrastructure upgrades.

Sophos' research found that 99% of organizations include AI capabilities on their list of requirements for cybersecurity platforms, with the most common reason being to improve protection. However, only 20% of respondents cited this as their primary reason, indicating a lack of consensus on the necessity of AI tools in security.

Three-quarters of leaders said that measuring the additional cost of AI features in their security tools is challenging. For example, Microsoft controversially increased the price of Office 365 by 45% this month due to the inclusion of Copilot.

On the other hand, 87% of respondents believe the efficiency savings related to AI will outweigh the additional cost, which may explain why 65% have already adopted security solutions with AI. The launch of the low-cost AI model DeepSeek R1 has raised hopes that the price of AI tools will soon decrease across the board.

SEE: HackerOne: 48% of security professionals believe AI is risky

But cost is not the only outstanding concern flagged by Sophos' researchers. A significant 84% of security leaders worry that high expectations for AI tools' capabilities will pressure them to reduce their team's headcount. An even greater proportion, 89%, are concerned that flaws in the tools' capabilities could work against them and introduce security threats.

“Low-quality and poorly implemented AI models can inadvertently introduce considerable cybersecurity risk of their own, and the adage ‘garbage in, garbage out’ is particularly relevant to AI,” the Sophos researchers said.

Cybercriminals are not using AI as much as you might think

Security concerns may be dissuading cybercriminals from adopting AI as quickly as expected, according to separate Sophos research. Despite analysts' predictions, the researchers found that AI is not yet widely used in cyberattacks. To assess the prevalence of AI use within the hacking community, Sophos examined posts on underground forums.

The researchers identified fewer than 150 posts about GPTs or large language models in the last year. For scale, they found more than 1,000 posts about cryptocurrency and more than 600 threads related to buying and selling network access.

“Most threat actors on the cybercrime forums we investigated still don't appear to be notably excited about generative AI, and we found no evidence of cybercriminals using it to develop new exploits or malware,” Sophos' researchers wrote.

One Russian-language crime site has had a dedicated AI area since 2019, but it has only 300 threads, compared with more than 700 and 1,700 threads in the malware and network access sections, respectively. However, the researchers noted that this could be considered “relatively fast growth for a topic that has only become widely known in the last two years.”

However, in one post, a user admitted to talking to a GPT for social reasons, to combat loneliness, rather than to stage a cyberattack. Another user replied that doing so is “bad for your OPSEC [operational security],” further highlighting the community's lack of trust in the technology.

Hackers are using AI for spamming, gathering intelligence, and social engineering

The posts and threads that do mention AI apply it to techniques such as spamming, gathering open-source intelligence, and social engineering; the latter includes using GPTs to generate phishing emails and spam texts.

Security firm VIPRE detected a 20% increase in business email compromise attacks in the second quarter of 2024 compared with the same period in 2023; AI was responsible for two-fifths of those BEC attacks.

Other posts focus on “jailbreaking,” where models are instructed to bypass safeguards with a carefully constructed prompt. Malicious chatbots designed specifically for cybercrime have been prevalent since 2023. While models like WormGPT have been in use for some time, newer ones such as GhostGPT are still emerging.

Only a few “primitive and low-quality” attempts to generate malware, attack tools, and exploits with AI were spotted by Sophos' research on the forums. Such things are not unheard of; in June, HP intercepted an email campaign spreading malware in the wild with a script that “was highly likely to have been written with the help of GenAI.”

Talk about AI-generated code tended to be accompanied by sarcasm or criticism. For example, on a post allegedly containing hand-written code, one user replied, “Is this written with ChatGPT or something… this code won't work.” Sophos' researchers said the general consensus was that using AI to create malware was for “lazy and/or low-skilled individuals looking for shortcuts.”

Interestingly, some posts mentioned creating AI-enabled malware as an aspiration, indicating that once the technology becomes available, they would like to use it in attacks. One post titled “The world's first autonomous C2” included the admission that “this is still just a product of my imagination for now.”

“Some users are also using AI to automate routine tasks,” the researchers wrote. “But the consensus seems to be that most don't trust it for anything more complex.”
