Harness today revealed it has developed a Model Context Protocol (MCP) server that will provide more context to the artificial intelligence (AI) tools, agents and platforms being used to build software.
Rohan Gupta, a principal product manager for continuous delivery and AI DevOps at Harness, said exposing the data residing in the Harness continuous integration/continuous delivery (CI/CD) platform will streamline workflows. It eliminates the need to rely on custom plug-ins and application programming interfaces (APIs) to provide additional context about the application development environments where code is ultimately deployed.
Originally developed by Anthropic, MCP is based on a JSON-RPC 2.0 API framework and is rapidly becoming a de facto standard for integrating AI tools, agents and platforms. The overall goal is to make it easier to expose relevant data to any AI agent that has permission to access it, so that an AI coding tool can, for example, more easily understand why a specific DevOps pipeline failed, said Gupta.
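As a rough illustration of what that looks like on the wire, the sketch below shows an MCP tool invocation expressed as a JSON-RPC 2.0 message. The tool name and its arguments are hypothetical examples chosen for clarity, not the actual schema of Harness's MCP server.

```typescript
// Minimal sketch of an MCP tool invocation as a JSON-RPC 2.0 message.
// The tool name ("get_pipeline_execution") and its arguments are
// hypothetical examples, not the Harness MCP server's documented schema.

interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params: Record<string, unknown>;
}

const request: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call", // standard MCP method for invoking a server-side tool
  params: {
    name: "get_pipeline_execution",   // hypothetical tool exposed by the server
    arguments: {
      pipelineId: "build_and_deploy", // hypothetical identifiers
      executionId: "exec-1234",
    },
  },
};

// An AI agent (the MCP client) sends this request to the MCP server, which
// returns pipeline context — status, logs, failure reasons — as a JSON-RPC
// response the agent can reason over.
console.log(JSON.stringify(request, null, 2));
```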
DevOps teams will also be able to employ the role-based access controls (RBAC) provided by Harness to, for example, limit an agent's access to deployment logs only. All API keys are managed through Harness, and no data is ever sent to a large language model (LLM).
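To give a sense of how that scoping might look from the agent's side, the sketch below uses the open-source TypeScript MCP SDK to launch an MCP server process with an API key supplied through an environment variable. The server command, the HARNESS_API_KEY variable name and the client name are illustrative assumptions, not Harness's documented setup.

```typescript
// Sketch: connecting an MCP client to a locally launched MCP server,
// passing a scoped API key via an environment variable. The command and
// the HARNESS_API_KEY variable are assumptions for illustration only.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Launch the MCP server as a child process; the API key stays in the
  // server's environment and is never forwarded to the LLM itself.
  const transport = new StdioClientTransport({
    command: "harness-mcp-server", // hypothetical server binary name
    env: { HARNESS_API_KEY: process.env.HARNESS_API_KEY ?? "" },
  });

  const client = new Client(
    { name: "devops-agent", version: "1.0.0" },
    { capabilities: {} }
  );
  await client.connect(transport);

  // RBAC enforced on the server side determines which tools and data this
  // key can reach — for example, deployment logs only.
  const tools = await client.listTools();
  console.log(tools.tools.map((t) => t.name));

  await client.close();
}

main().catch(console.error);
```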
It’s not clear how widely AI tools and platforms have been adopted, but a Futurum Group survey finds 41% of respondents expect AI tools and platforms to be used to generate, review and test code, while 39% plan to make use of AI models based on machine learning algorithms.
Ultimately, DevOps engineers will soon find themselves supervising a small army of AI agents with the help of what are becoming known as planner agents, said Gupta. It remains to be seen how multiple AI agents will interact with one another to complete an end-to-end task, but as orchestration frameworks continue to evolve, there will be higher levels of interoperability between the agents that both Harness and its third-party partners are building, he added.
In the meantime, DevOps teams might want to start creating an inventory of manual tasks that could soon be assigned to an AI agent. As the ability of these agents to reliably complete such tasks improves, DevOps engineers should be able to spend more of their time on activities that provide higher value to the business, he noted.
Arguably, the next major challenge will be the cultural changes that will inevitably occur as more DevOps tasks are automated using AI agents that, for all intents and purposes, are now part of a DevOps team, added Gupta.
Regardless of the approach to building software, the expectation is that the data AI agents need to write better code will be readily available. Each organization will need to decide to what degree it trusts AI technologies to build software, but as the pace of software innovation accelerates in the age of AI, it may soon not be feasible to compete effectively without them.