Datadog is expanding the scope of its DevOps portfolio following a pair of acquisitions that add feature flagging and data observability capabilities to its existing range of services.
At the same time, the company launched a pair of open source projects. The first is Toto, an open-weights AI model trained specifically on time-series data in a way that makes it possible to instantly detect anomalies and capacity planning issues. The second is BOOM, a time-series benchmark that provides access to 350 million observations across 2,807 real-world multivariate time series to capture the scale, sparsity, spikes and cold-start issues that are unique to telemetry data collected from production environments.
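To make the anomaly detection use case concrete, the sketch below flags outliers in a metric stream using a rolling z-score. This is a deliberately simple statistical baseline for illustration only, not how Toto works; the window size and threshold are arbitrary assumptions:

```python
from collections import deque

def rolling_zscore_anomalies(values, window=20, threshold=3.0):
    """Flag points more than `threshold` standard deviations away
    from the mean of the preceding `window` observations."""
    buf = deque(maxlen=window)
    anomalies = []
    for i, v in enumerate(values):
        if len(buf) == window:
            mean = sum(buf) / window
            var = sum((x - mean) ** 2 for x in buf) / window
            std = var ** 0.5
            if std > 0 and abs(v - mean) / std > threshold:
                anomalies.append(i)
        buf.append(v)
    return anomalies

# A noisy-but-steady series with one spike at index 30.
series = [10.0, 11.0] * 15 + [100.0] + [10.0, 11.0] * 5
print(rolling_zscore_anomalies(series))  # → [30]
```

A production model has to cope with exactly the issues BOOM benchmarks, such as sparsity and cold starts, which a fixed-window baseline like this handles poorly.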
Michael Whetten, vice president of product at Datadog, said that, in general, the provider of a widely used monitoring platform is now moving to expand the reach and scope of its offerings.
For example, the company is adding feature flagging and experimentation to its Product Analytics suite of tools following the acquisition of Eppo. That capability will make it possible to track code changes using feature flags that create branches within applications, which makes it simpler for DevOps teams to test new capabilities or limit access to them.
Additionally, data science teams and product managers will be able to design experiments and measure their impact, while business analysts can use feature flags to understand overall usage patterns.
That move comes on the heels of a separate deal that involved the acquisition of Metaplane, a provider of a data observability platform that makes use of machine learning algorithms to prevent, detect and resolve data quality issues. That capability is now especially critical for any organization that is building or customizing artificial intelligence (AI) models, noted Whetten.
While software engineers have long used feature flags, also known as toggles, to test services, the flags are now increasingly also used in production environments to limit access to specific tiers of capabilities and services that are made available under different licensing terms.
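That tier-gating pattern can be sketched in a few lines. The flag definitions, tier names and rollout percentages below are hypothetical; a real system such as Eppo evaluates flags through a managed service rather than a hardcoded dictionary:

```python
TIER_ORDER = {"free": 0, "pro": 1, "enterprise": 2}

# Each flag declares the minimum licensing tier that may see it,
# plus a percentage rollout for gradual testing (hypothetical values).
FLAGS = {
    "new-dashboard": {"min_tier": "free", "rollout_pct": 25},
    "audit-logs": {"min_tier": "enterprise", "rollout_pct": 100},
}

def is_enabled(flag: str, user_id: int, tier: str) -> bool:
    cfg = FLAGS.get(flag)
    if cfg is None:
        return False  # unknown flags default to off
    if TIER_ORDER[tier] < TIER_ORDER[cfg["min_tier"]]:
        return False  # licensing-tier gate
    # Deterministic rollout: bucket the user into 0-99 and compare.
    return user_id % 100 < cfg["rollout_pct"]

print(is_enabled("audit-logs", 42, "enterprise"))  # True
print(is_enabled("audit-logs", 42, "free"))        # False
```

The same check thus serves two purposes the article describes: staged testing of new capabilities and enforcement of licensing tiers.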
Data observability, meanwhile, has become a more pressing issue as the volume of data that applications need to access continues to increase exponentially.
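At its most basic, data observability means continuously checking data sets for quality regressions. The sketch below applies two static checks, null rate and freshness, with arbitrary thresholds and hypothetical column names; platforms such as Metaplane use machine learning to learn these thresholds rather than hardcoding them:

```python
from datetime import datetime, timedelta, timezone

def check_table(rows, null_rate_limit=0.05, max_staleness=timedelta(hours=1)):
    """Return a list of data-quality issues found in `rows` (dicts):
    columns whose null rate exceeds the limit, and staleness based on
    an `updated_at` timestamp. Static thresholds, for illustration."""
    if not rows:
        return ["table is empty"]
    issues = []
    for col in rows[0].keys():
        nulls = sum(1 for r in rows if r.get(col) is None)
        rate = nulls / len(rows)
        if rate > null_rate_limit:
            issues.append(f"{col}: null rate {rate:.0%} exceeds {null_rate_limit:.0%}")
    stamps = [r["updated_at"] for r in rows if r.get("updated_at")]
    if stamps and datetime.now(timezone.utc) - max(stamps) > max_staleness:
        issues.append("data is stale")
    return issues

rows = [
    {"id": 1, "email": "a@x.com", "updated_at": datetime.now(timezone.utc)},
    {"id": 2, "email": None, "updated_at": datetime.now(timezone.utc)},
]
print(check_table(rows))  # → ['email: null rate 50% exceeds 5%']
```

For AI workloads in particular, catching a quality regression like this before the data reaches a training or fine-tuning pipeline is what makes the capability critical.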
As application environments become more complex, IT teams will increasingly be looking to standardize on platforms that provide a wider range of capabilities that go well beyond monitoring a set of pre-defined metrics, said Whetten.
The degree to which IT organizations will need to consume those services will, of course, vary widely. However, one thing is certain: application environments have become too complex to observe and manage without access to more advanced forms of analytics. The sheer volume of telemetry data being generated by modern applications is already overwhelming. The challenge now is to provide IT teams with an integrated set of services that enables them to identify and remediate issues long before they can have a material impact on application environments, noted Whetten.
There is, of course, already no shortage of options when it comes to observability platforms for analyzing telemetry data. The issue is making sure the return on investment in those platforms can be justified in a way that business leaders can easily appreciate.