A survey of 450 IT professionals in the U.S. and Europe finds 69% of organizations have discovered vulnerabilities in code generated by artificial intelligence (AI) tools, with 20% reporting a serious incident as a result.
Conducted by Sapio Research on behalf of Aikido Security, a provider of a platform for discovering vulnerabilities and protecting applications at runtime, the survey finds 92% of respondents are worried to some degree about vulnerabilities in AI-generated code, with 25% being seriously concerned.
Overall, the survey finds that, on average, 24% of the code running in production environments has now been generated using AI tools. The survey, however, also makes it clear that the quality of the code reviews being conducted remains suspect. Survey respondents report software engineers spend an average of 6.1 hours each week checking and triaging security tool alerts, with 72% of that time wasted on false positives. Nearly two-thirds of respondents (65%) said that, as a result, teams bypass security checks, delay fixes or dismiss findings because of alert fatigue.
Aikido CISO Mike Wilkes said that, regardless of how code is written, the pressure application development teams are under to deliver new features on time means that best DevSecOps practices will continue to be bypassed. The sad truth is that, in the name of expediency, many organizations are only paying lip service to best DevSecOps practices, he added.
It’s also unclear who is responsible when flawed code is deployed, with more than half of respondents (53%) blaming security teams for not discovering vulnerabilities, compared to 45% who blame the developer and 42% who blame whoever merged the code.
On the plus side, 79% of respondents said their organization is relying more on AI to help fix vulnerabilities, the survey finds. Even so, an equal percentage noted that it still takes longer than a single day to remediate critical vulnerabilities, with every organization surveyed reporting a backlog of issues that need to be addressed.
Despite these concerns, survey respondents remain optimistic about AI. A full 96% said they believe AI will write secure code eventually, but only 21% think that goal can be achieved without human oversight. A total of 90%, however, said they do expect AI to replace the need for humans to conduct penetration testing.
In general, AI is exposing software development flaws that have long existed, said Wilkes. The challenge is that, at the rate and scale applications are being built and deployed in the AI era, it’s now only a matter of time before these flaws manifest themselves in ways that create major disruptions. The issue then becomes determining whether those disruptions warrant revisiting how software is constructed, or whether they are simply the cost of doing business in a multi-trillion dollar global economy that depends on flawed software, noted Wilkes.
Hopefully, the code being generated by AI coding tools today is the worst it will ever be as additional advances are made. Until then, however, savvy DevOps teams would be wise to keep an eye on the technical debt that is starting to pile up as application developers rely more on AI with each passing day.