Two-thirds of security leaders consider banning AI-generated code


One of the most touted benefits of artificial intelligence is that it can take menial tasks off developers' plates. However, new research shows that security leaders aren't entirely on board with this idea: 63% of respondents are considering banning the use of AI in coding because of the risks involved.

An even larger proportion (92%) of decision-makers surveyed are concerned about the use of AI-generated code in their organization. Their chief concern is the reduced quality of the resulting code.

AI models may have been trained on outdated open source libraries, and developers could quickly become overly reliant on tools that make their lives easier, allowing poor code to proliferate in the company's products.

SEE: Top security tools for developers

Furthermore, security officials believe that AI-generated code is unlikely to be scrutinized as closely as handwritten lines. Developers may not feel as responsible for the output of an AI model, and therefore may feel less pressure to ensure it is perfect.

Last week, TechRepublic spoke to Tariq Shaukat, CEO of code security firm Sonar, about how he's “hearing more and more” about companies that have used AI to write their code and are experiencing outages and security issues.

“This is typically due to insufficient reviews, either because the company has not implemented robust code review and quality practices or because developers review AI-written code less than they would their own code,” he said.

“When asked about buggy AI, the most common response is ‘it’s not my code,’ meaning they feel less responsible because they didn’t write it.”

The new report, “Organizations Struggle to Secure AI-Generated and Open Source Code,” from machine identity management provider Venafi, is based on a survey of 800 security decision-makers in the US, UK, Germany and France. The study found that 83% of organizations are currently using AI to develop code, and that it is standard practice in more than half of them, despite concerns from security professionals.

“New threats, such as AI poisoning and model escape, have begun to emerge, while developers and novices are using massive waves of generative AI code in ways that are yet to be understood,” Kevin Bocek, chief innovation officer at Venafi, said in the report.

While many have considered banning AI-assisted coding, 72% feel they have no choice but to allow the practice to continue if their company is to remain competitive. According to Gartner, 90% of enterprise software engineers will be using AI-powered coding assistants by 2028 and will see productivity gains in the process.

SEE: 31% of organizations using generative AI ask it to write code (2023)

Security professionals are losing sleep over this problem

Two-thirds of respondents in the Venafi report say they find it impossible to keep up with ultra-productive developers when it comes to ensuring the security of their products, and 66% say they are unable to monitor the secure use of AI within the organization because they have no visibility into where it is being used.

As a result, security leaders are concerned about the consequences of missing potential vulnerabilities, with 59% losing sleep over the issue. Nearly 80% believe the proliferation of AI-powered code will lead to a security reckoning, with a major incident prompting a reform of how it is handled.

Bocek added in a press release: “Security teams are caught between a rock and a hard place in a new world where AI writes code. Developers are already super-powered by AI and are unwilling to give up their superpowers. And attackers are infiltrating our ranks – recent examples of long-term intrusion into open source projects and North Korean infiltration of IT are just the tip of the iceberg.”
