Originally developed by Anthropic, the Model Context Protocol (MCP) is rapidly becoming a de facto standard for enabling interoperability between artificial intelligence (AI) agents and external tools and data sources.
In the financial services industry, we believe that agentic AI is the future of IT, and that is why we began developing our AI agents about two years ago.
By AI agents, we mean software that uses large language models (LLMs) to think and external tools to act. AI agents have been tremendously helpful not only for development activities such as unit test generation and code improvement but also for adding significant value to observability, infrastructure as code and document generation.
Before Anthropic announced MCP, it was not straightforward for these agents to connect to external or internal tools such as databases or APIs. For example, New Relic previously had to add support for observing specific LLMs in its observability use cases one integration at a time, but with MCP, it can now extend its observability capabilities to the AI agents that access those LLMs.
Before we get into the detail of how MCP benefits a site reliability engineer (SRE), let’s understand what MCP is.
What is MCP?
MCP is an open protocol that enables seamless integration between LLM-based applications or agents and external data sources and tools. Whether you’re building an AI-powered integrated development environment (IDE), enhancing a chat interface or creating custom AI workflows, MCP offers a standardized way to connect LLMs with the context they need. In other words, it standardizes how applications supply context to LLMs.
MCP provides a consistent framework for AI agents to:
- Share context with LLMs
- Expose capabilities in terms of APIs
- Expose tools such as databases
- Build workflows
Let’s look at how it works. Think of it like a web API but specifically designed for LLM interactions. MCP servers can:
- Expose data through resources (used to load information into the LLM’s context)
- Provide functionality through tools (used to execute code)
- Define interaction patterns through prompts (reusable templates for LLM interactions)
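The three primitives above can be sketched conceptually in plain Python. This is not the real MCP SDK; the function and capability names are illustrative, and the point is only to show how resources, tools and prompts differ in role:

```python
# Conceptual sketch of the three MCP server primitives (illustrative names,
# not the actual SDK API).

def resource_service_logs() -> str:
    """Resource: read-only data loaded into the LLM's context."""
    return "2024-05-01 12:00:01 ERROR payment-service request timeout"

def tool_restart_container(name: str) -> str:
    """Tool: executable functionality the model can invoke."""
    return f"container '{name}' restarted"

def prompt_incident_triage(service: str) -> str:
    """Prompt: a reusable template for an LLM interaction."""
    return f"You are an SRE. Triage the latest errors from {service} and propose a fix."

# A server advertises its capabilities so that clients can discover them.
CAPABILITIES = {
    "resources": {"logs://payment-service": resource_service_logs},
    "tools": {"restart_container": tool_restart_container},
    "prompts": {"incident_triage": prompt_incident_triage},
}
```

A client would first read this capability listing, then invoke the entry it needs, which is why the same server works unchanged across different hosts.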
MCP Architecture
MCP follows a client–server architecture.

- MCP Hosts: Programs such as Claude Desktop, IDEs or AI tools that initiate the connection between the MCP client and MCP server.
- MCP Clients: Protocol clients that reside within the host and maintain one-on-one connections with servers. They use the context, tools and prompts provided by the server.
- MCP Servers: Lightweight programs that expose specific capabilities through the standardized MCP. They provide context, tools and prompts to clients.
MCP Servers for an SRE
For SREs, many useful MCP servers have already been built. For example, the Azure MCP server allows AI agents to access Azure resources, the Docker MCP server enables agents to integrate with Docker and so on.
As an SRE, whether you are building remediation solutions or dashboards using AI and LLMs or developing systems to monitor Docker containers, you can leverage the respective MCP servers through the LLMs. For example, an MCP server can call Azure DevOps APIs behind the scenes to trigger remediation pipelines or a Docker MCP server can call Docker APIs to manage containers.
This way, the MCP server standardizes interactions between various LLMs and APIs. Suppose your team needs to interact with platform resources for their AI implementation. In that case, you can build your own MCP server to standardize such interactions — built on top of platform APIs and exposed for organizational use.
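As a sketch of what such an internal server's tool might do behind the scenes, the function below prepares the request for Azure DevOps' "run pipeline" REST endpoint. The organization, project and pipeline ID are placeholders, and the actual authenticated HTTP call is deliberately left out of this sketch:

```python
# Sketch of a tool an internal MCP server could expose to trigger a
# remediation pipeline. The URL shape follows Azure DevOps' pipeline-runs
# REST endpoint; org/project/pipeline values are placeholders, and the
# authenticated HTTP call itself is omitted here.

def build_pipeline_run_request(organization: str, project: str,
                               pipeline_id: int, branch: str = "main") -> dict:
    """Prepare the request an MCP tool would send to start a pipeline run."""
    return {
        "url": (f"https://dev.azure.com/{organization}/{project}"
                f"/_apis/pipelines/{pipeline_id}/runs?api-version=7.1"),
        "method": "POST",
        "body": {"resources": {"repositories": {"self": {"refName": f"refs/heads/{branch}"}}}},
    }

req = build_pipeline_run_request("contoso", "sre-tools", 42)
```

Wrapping the platform API this way keeps credentials and endpoint details inside the server, so every team's agent triggers remediation the same way regardless of which LLM it uses.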
Developing an MCP Server
Building your own MCP server is straightforward. You can get started quickly with the Python MCP software development kit (SDK), and use the MCP Inspector to test and debug your server for faster validation.
In the example below, I am using GitHub Copilot with Visual Studio (VS) Code as my chat agent to pull Azure Application Insights data or trigger Azure DevOps pipelines.

