Qodo this week updated its code review platform, which is based on large language models (LLMs), to add multiple artificial intelligence (AI) agents trained to handle specific tasks with much higher levels of precision.
Company CEO Itamar Friedman said the multi-agent approach provided in version 2.0 of the Qodo platform now makes it possible to take advantage of recall and memory to enable AI agents to understand and review code at a level comparable to that of a senior engineer.
For example, the Qodo Code Review Benchmark 1.0 evaluates AI reviews against 580 defects across 100 real pull requests (PRs) drawn from production repositories. Qodo achieved an F1 score of 60.1%, outperforming seven other platforms, noted Friedman.
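For readers unfamiliar with the metric, F1 is the harmonic mean of precision and recall. The sketch below shows how such a score is computed; the defect counts used in the example are hypothetical and are not Qodo's actual benchmark results.

```python
def f1_score(true_positives: int, false_positives: int, false_negatives: int) -> float:
    """Harmonic mean of precision and recall."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: a reviewer raises 400 findings, of which 330
# match real defects, against a ground truth of 580 defects.
score = f1_score(true_positives=330, false_positives=70, false_negatives=250)
print(round(score, 3))  # 0.673
```

A high F1 requires the reviewer to flag most real defects (recall) without burying developers in false alarms (precision), which is why it is a common single-number summary for code review benchmarks.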
Available on DevOps platforms from GitHub, GitLab and Bitbucket, Qodo is specifically designed to enable DevOps teams to identify code quality issues, a problem that is becoming more acute as the amount of flawed code generated by AI coding tools continues to increase.
In the absence of any ability to review that code using an AI tool, it won’t be too long before DevOps engineers who are responsible for code quality are simply overwhelmed, said Friedman. In effect, the DevOps bottleneck is now shifting to the code review process, he added.
On the plus side, code review tools such as Qodo continue to steadily improve to the point where it will soon be possible to both generate and review code accurately, noted Friedman. In the meantime, however, the need to augment DevOps engineers to ensure that only high-quality code is deployed has become a much more pressing concern, he added.
A recent survey conducted by The Futurum Group finds that a full 60% of respondents said their organization is now actively using AI to build and deploy software. The top areas of investment over the next 12 to 18 months are AI Copilot/AI code tools (38%), AI agent development (37%), AI-assisted testing (37%) followed closely by DevOps (37%), automated deployment (34%) and software security testing (31%), the survey finds.
The pace at which DevOps teams are moving varies considerably from one organization to the next. However, there is little doubt by now that most have at least experimented with AI coding tools. The challenge is that the level of trust in the output of those tools will vary widely depending on the use case and the actual level of AI expertise the developer or software engineer brings to bear.
In the meantime, DevOps teams should assume that one day all code will be generated by AI coding tools, says Friedman. That day may not arrive as quickly as the current level of AI hype suggests, but it's apparent that application developers will need to evolve into roles more like that of a product manager rather than mainly writing code, he added.
Ultimately, however, no matter how fast code is written, it's not going to make a meaningful impact on the organization if quality is sacrificed at the altar of speed. In fact, it's probable that hastily developed code will wind up being more trouble than it's worth.

