Observe Inc. today added two artificial intelligence (AI) agents to its observability platform that enable DevOps teams to automate incident investigations while giving developers the ability to generate code instrumentation, debug applications and launch natural language queries to better understand how their applications are running.
Designed primarily for site reliability engineers (SREs), the Observe AI SRE Agent autonomously applies context provided by the Observe platform to pinpoint root causes and suggest fixes to help resolve issues faster.
The o11y.ai agent, meanwhile, enables developers to first automatically generate the OpenTelemetry code needed to instrument their applications, and then to use natural language to ask questions about usage, errors and performance, as well as debug and validate fixes, using context and insights derived from their telemetry data and the structure of their code.
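To make that first step concrete, here is a minimal sketch, in Python, of the kind of OpenTelemetry instrumentation such an agent might generate. The service name, span name and order-processing function are hypothetical stand-ins for illustration, not actual o11y.ai output.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Configure a tracer provider that batches spans and exports them over OTLP.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name

def process_order(order_id: str) -> None:
    # Wrap the business logic in a span so latency and errors are recorded.
    with tracer.start_as_current_span("process_order") as span:
        span.set_attribute("order.id", order_id)
        ...  # existing business logic
```

Once spans like these flow into the platform, the natural language queries described above have telemetry to reason over.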
Both agents leverage a graph and a Model Context Protocol (MCP) server that Observe has embedded in its platform to make it simpler for AI agents to discover and query data. Originally developed by Anthropic, MCP makes it possible for developers using AI coding tools such as Claude Code, OpenAI Codex, Augment Code, Windsurf and n8n to access the telemetry data aggregated in the Observe platform.
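For illustration, the sketch below shows how a coding tool could connect to an MCP server and query telemetry using Anthropic's official Python SDK. The server command and the query_telemetry tool name are assumptions made for the example; Observe's actual MCP server defines its own tools and transport.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch a (hypothetical) MCP server as a subprocess over stdio.
    params = StdioServerParameters(command="observe-mcp-server", args=[])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover which tools the server exposes.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # Ask a natural-language question against the telemetry graph.
            result = await session.call_tool(
                "query_telemetry",  # hypothetical tool name
                arguments={"question": "What caused the latency spike at 09:00?"},
            )
            print(result.content)

asyncio.run(main())
```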
Observe CEO Jeremy Burton said that the approach enables developers to access observability insights directly from within their preferred application development tool without having to learn how to navigate the graphical interface of an observability platform.
Collectively, these agents will help reduce the burnout that software engineering teams are experiencing as they use AI coding tools to deploy more complex applications at greater scale, he added.
AI agents are not likely to replace the need for developers and DevOps engineers any time soon, but they do significantly reduce the daily toil that software engineering teams encounter, Burton noted.
For example, organizations given early access to these AI agents have seen a 10x improvement in triage, enabling them to resolve in a few minutes issues that previously would have required hours, he added.

It’s not clear to what degree DevOps teams will rely on AI agents provided by platform providers such as Observe versus opting to build their own AI agents that automate workflows across multiple platforms. What is certain is that those teams will soon be orchestrating multiple AI agents embedded across highly distributed workflows, and they will need to spend some time determining how best to organize those agents across their DevOps workflows, Burton noted.
Hopefully, as telemetry data becomes more accessible, the overall quality of the applications being built and deployed will significantly increase. That is becoming an especially critical issue in the age of AI because coding tools based on large language models (LLMs), while increasing developer productivity, also generate more verbose code that is both more challenging to debug and more expensive to run.
Regardless of the number of AI agents that are ultimately deployed, the one certain thing is that at the heart of those workflows will be a human engineer who is uniquely able to make sense of it all.



