Cycode today announced early access to a capability that identifies which artificial intelligence (AI) coding tools application developers are using, along with an AI Bill of Materials (AIBOM) that identifies the underlying technologies, such as large language models (LLMs), those tools invoke.
Devin Maguire, senior product marketing manager for Cycode, said these additions to its application security posture management (ASPM) platform will provide DevSecOps teams with the visibility needed to ensure only validated AI tools and technologies, including Model Context Protocol (MCP) servers, are being employed across the software development lifecycle (SDLC).
The overall goal is to reduce reliance on shadow AI coding tools that have not been vetted by DevOps engineering teams, added Maguire.

Based on a risk intelligence graph (RIG) developed by Cycode, the company’s ASPM platform now provides a comprehensive inventory of all AI and machine learning (ML) assets, automatically discovering when developers leverage AI coding assistants, connect an MCP server or add AI models, with every asset traced back to its source in a code repository. DevSecOps teams can then use the Cycode ASPM platform to define custom policies to govern AI adoption, he added.
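Cycode has not published the syntax it uses for these policies, but the general pattern is an allowlist evaluated against the discovered inventory. The Python sketch below is purely illustrative; the tool names, asset fields and helper function are hypothetical, not Cycode’s API.

```python
# Hypothetical sketch: an allowlist-style policy checked against a discovered
# inventory of AI assets. Field and tool names are illustrative only.
from dataclasses import dataclass

# Tools and MCP servers the security team has vetted (example values).
APPROVED_ASSISTANTS = {"github-copilot", "amazon-q-developer"}
APPROVED_MCP_SERVERS = {"internal-docs-mcp"}

@dataclass
class AIAsset:
    kind: str        # "assistant", "mcp_server" or "model"
    name: str
    repository: str  # source repository the asset was traced back to

def violations(inventory: list[AIAsset]) -> list[str]:
    """Flag any discovered AI asset that is not on the approved list."""
    findings = []
    for asset in inventory:
        if asset.kind == "assistant" and asset.name not in APPROVED_ASSISTANTS:
            findings.append(f"Unapproved AI assistant '{asset.name}' in {asset.repository}")
        elif asset.kind == "mcp_server" and asset.name not in APPROVED_MCP_SERVERS:
            findings.append(f"Unapproved MCP server '{asset.name}' in {asset.repository}")
    return findings

if __name__ == "__main__":
    discovered = [
        AIAsset("assistant", "github-copilot", "payments-service"),
        AIAsset("mcp_server", "random-community-mcp", "payments-service"),
    ]
    for finding in violations(discovered):
        print(finding)
```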
The AIBOM, meanwhile, extends the scope of the software bill of materials (SBOM) capabilities that Cycode already provides, noted Maguire.
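Cycode has not detailed its AIBOM schema in this announcement. As a rough illustration of the kind of metadata such an inventory records, the sketch below shows a hypothetical entry; every field name and value is an assumption, not Cycode’s format.

```python
# Hypothetical AIBOM entry: illustrates the sort of data an AI bill of materials
# might capture (model, provider, invoking tool, source repository).
aibom_entry = {
    "component": "checkout-service",
    "ai_assets": [
        {
            "type": "llm",
            "name": "gpt-4o",              # underlying model invoked by a coding assistant
            "provider": "OpenAI",
            "invoked_by": "github-copilot",
            "source_repository": "org/checkout-service",
        },
        {
            "type": "mcp_server",
            "name": "internal-docs-mcp",
            "source_repository": "org/checkout-service",
        },
    ],
}

print(len(aibom_entry["ai_assets"]), "AI assets recorded for", aibom_entry["component"])
```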
Cycode has previously added AI agents to its ASPM platform that, for example, have been trained to determine how exploitable a specific vulnerability found in an application actually is. In addition, Cycode has made available an AI Security Return on Investment (ROI) Calculator that analyzes the impact that using AI can have on specific DevSecOps use cases.
It’s not clear to what degree DevSecOps teams are poised to crack down on the usage of rogue AI tools. After all, there is a fine line between encouraging application developers to be more productive and ensuring that the code these tools generate will not only run in a production environment but also not introduce any vulnerabilities. Most AI coding tools rely on general-purpose LLMs that were trained to generate code using examples of varying quality pulled from across the Web. As a result, each piece of code generated by an AI coding tool should be reviewed and validated before being incorporated into a production environment.
Ultimately, the AI coding genie is out of the proverbial bottle, so there is no going back at this point. However, as the volume of code being generated continues to increase, it’s already apparent that DevSecOps teams are being overwhelmed. The only effective response is going to be to rely more on AI agents to help review the code created by other AI agents.
In the meantime, savvy DevSecOps teams are already starting to more closely assess the code generation capabilities of various tools and platforms. The challenge, of course, is that given the current pace of AI innovation, the capabilities of existing tools and platforms are evolving rapidly, and new ones become available with each passing week. Application developers are inevitably going to experiment with them to automatically generate code they not only didn’t write but may not actually understand.