Tag: Large Language Models
JFrog Adds Ability to Track Usage of AI Coding Tools
JFrog introduces AI-Generated Code Detection and Shadow AI Detection tools to identify AI-created code, track model usage, and enhance DevSecOps governance across software supply chains ...
MIT Researchers Propose a New Way to Build Software That Actually Makes Sense
MIT researchers propose a new framework to make software clearer and safer by organizing code into “concepts” and “synchronizations” for better visibility ...
Kong Adds MCP Support to Tool for Designing and Testing APIs
Kong Inc. has integrated Model Context Protocol (MCP) support into Insomnia 12, enabling DevOps teams to design, test, and secure MCP clients and servers for AI-driven APIs—reducing misconfigurations and enhancing compliance in ...
Sonar Previews Service to Improve Quality of AI-Generated Code
Sonar’s SonarSweep improves AI-generated code by reducing bugs and vulnerabilities, helping organizations train more reliable AI models ...
AI-Generated Code Packages Can Lead to ‘Slopsquatting’ Threat
AI hallucinations – the occasional tendency of large language models to respond to prompts with incorrect, inaccurate or made-up answers – have been an ongoing concern as the enterprise adoption of generative ...
How to Extend an Application Security Program to AI/ML Applications
While many AI/ML application risks resemble traditional application security risks and can be mitigated with the same tools and platforms, runtime security for the new models requires new methods of securing ...