- 96% of developers do not fully trust AI, 52% do not always check for errors
- Most ChatGPT and Perplexity users access AI through a personal account
- Contrary to trends, data exposure and vulnerabilities are major concerns
Sonar's latest State of Code developer survey found that nearly all developers (96%) say they do not fully trust that AI-generated code is functionally correct, despite its widespread use. Worse, many do not properly verify the AI-generated code they rely on.
Currently, around 42% of developers' code is said to be generated by AI (a significant jump from just 6% in 2023), and this is expected to rise to around 65% by 2027.
And yet, less than half (48%) of developers always verify AI-generated work before committing it, highlighting the huge potential for bugs and vulnerabilities to slip through.
Developers do not verify AI-generated code before using it
While three in five (59%) say they make a "moderate" or "substantial" effort to verify AI-generated code, nearly two in five (38%) agree that it takes longer to verify than equivalent code written by humans. And because generative models draw on vast amounts of code scraped from the internet, three in five (61%) agree that AI output often looks right, but isn't.
This study corroborates separate recent research published by CodeRabbit, which found that AI-generated code introduces 1.7 times more issues (including 1.7 times more major issues) than human-written code.
Current trends show that AI tools are used most in prototyping (88%) and internal production software (83%), which may not seem as critical, but almost as many developers use them for customer-facing applications (73%). GitHub Copilot (75%) and ChatGPT (74%) are by far the most widely used assistants.
Taking this a step further, Sonar found that more than one in three (35%) developers access these tools through personal accounts rather than work-approved ones, a figure that rises to 52% among ChatGPT users and 63% among Perplexity users. This presents yet another risk to potentially confidential or sensitive business information.
In fact, despite such widespread AI use, developers' biggest concerns include data exposure (57%), minor vulnerabilities (47%), and serious vulnerabilities (44%).
“Generating code faster is only half the battle,” the report concludes. “The real value comes from being able to trust and verify that code efficiently.”