Research Notes

Does Standardizing on OpenTelemetry Actually Solve the Data Fragmentation Problem?

Native log ingestion via the OpenTelemetry Collector aims to unify data streams and reduce reliance on proprietary agents for Splunk users.

3/25/2026

Key Highlights

  • Splunk now supports native log ingestion from the OpenTelemetry Collector via the OTLP protocol.

  • The update aims to eliminate the need for multiple agents by consolidating traces, metrics, and logs into a single pipeline.

  • Customers can use the collector for pre-processing and field extraction to improve downstream analytics efficiency.

  • This move signals a shift toward vendor-neutral architectures to mitigate the risks of proprietary lock-in.

The News

Splunk has introduced native log ingestion through its distribution of the OpenTelemetry Collector for the Splunk Platform. The update allows users to stream logs using the OTLP protocol across both on-premises and cloud environments, a departure from previous limitations that confined such support largely to specific Kubernetes use cases. Full details are in the company's announcement blog.

Analyst Take

Although this may seem a nerdy announcement deep in the weeds, we see this move as a pragmatic response to the gravity of open standards in the observability market. For years, the Universal Forwarder was the undisputed king of the Splunk ecosystem; it was reliable but inherently proprietary. As IT environments have become more distributed, the overhead of managing disparate agents for different data types has become a significant tax on operations teams. By embracing the OpenTelemetry (OTel) Collector for logs, Splunk is acknowledging that the future of data ingestion is no longer a walled garden.

What Was Announced

The update centers on the Splunk Distribution of the OpenTelemetry Collector, which is now architected to support native log ingestion into the Splunk Platform via the OpenTelemetry Protocol (OTLP). This functionality is designed to provide a unified path for data, regardless of whether the underlying infrastructure is hosted in a private data center or a public cloud. Technically, the collector is built to handle the heavy lifting of data collection for traces, metrics, and logs simultaneously. It includes capabilities for metadata enrichment and field extraction at the edge. This means the collector is designed to pre-process log data, structuring it before it even reaches the indexer. Such an approach aims to deliver better performance for downstream applications, such as Splunk Enterprise Security, by ensuring data arrives in a more usable format. The release is currently in public preview, signaling a phased rollout as the company refines the integration.
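To make the architecture described above concrete, a collector pipeline of this shape might look like the sketch below. This is an illustrative configuration, not the vendor's reference config: the file paths, regex, endpoint, index, and token are placeholders, and exact component availability should be confirmed against the documentation for the Splunk Distribution of the OpenTelemetry Collector.

```yaml
receivers:
  otlp:                        # accept OTLP logs from apps or other collectors
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
  filelog:                     # tail local files and extract fields at the edge
    include: [/var/log/app/*.log]
    operators:
      - type: regex_parser     # structure the raw line before it reaches the indexer
        regex: '^(?P<time>\S+) (?P<level>\w+) (?P<msg>.*)$'

processors:
  resource:                    # metadata enrichment: stamp every record with its origin
    attributes:
      - key: deployment.environment
        value: production
        action: upsert
  batch: {}

exporters:
  splunk_hec:                  # ship to Splunk over HTTP Event Collector
    endpoint: https://splunk.example.com:8088/services/collector
    token: ${SPLUNK_HEC_TOKEN} # placeholder
    index: main

service:
  pipelines:
    logs:
      receivers: [otlp, filelog]
      processors: [resource, batch]
      exporters: [splunk_hec]
```

The point to notice is that parsing and enrichment happen in the collector's processor chain, so data arrives at the indexer already structured.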

We believe the shift toward OTLP is less about technical superiority and more about architectural freedom. When we look at how large enterprises manage their telemetry, the friction usually comes from the "agent tax." Every time a developer adds a new service, they have to worry about which agent is compatible with which backend. By standardizing on the OTLP collector, organizations can theoretically build a "collect once, send anywhere" pipeline. This reduces the cognitive load on DevOps teams. It is a sensible play and one we have been calling for.
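In collector terms, "collect once, send anywhere" reduces to a single logs pipeline fanned out across multiple exporters. The second backend below is purely illustrative, and the endpoints are placeholders:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch: {}

exporters:
  splunk_hec:
    endpoint: https://splunk.example.com:8088/services/collector
    token: ${SPLUNK_HEC_TOKEN}       # placeholder
  otlphttp/secondary:                # any other OTLP-capable backend
    endpoint: https://other-backend.example.com:4318

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [splunk_hec, otlphttp/secondary]  # fan out the same stream
```

Swapping or adding a backend is an exporter change, not an agent redeployment, which is the architectural freedom at issue.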

However, we should be realistic about the transition. The Universal Forwarder is deeply embedded in the enterprise, with years of hardening for security and reliability behind it. While the OTel Collector aims to deliver a more modern experience, it requires a shift in how teams think about data transformation. We see this as an opportunity for companies to clean up their "data swamps." Instead of dumping raw logs into a bucket, the collector allows for smarter filtering and masking at the source. This is not just about saving on license costs; it is about data quality.
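Filtering and masking at the source can be sketched with the collector's filter and transform processors. The severity threshold and masking pattern below are assumptions for illustration, and the OTTL statements assume a string log body; syntax should be checked against the current collector-contrib documentation:

```yaml
processors:
  filter/drop-debug:            # drop noise before it is shipped or indexed
    logs:
      log_record:
        - 'severity_number < SEVERITY_NUMBER_INFO'
  transform/mask:               # redact sensitive patterns in the log body
    log_statements:
      - context: log
        statements:
          - replace_pattern(body, "[0-9]{3}-[0-9]{2}-[0-9]{4}", "***-**-****")

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [filter/drop-debug, transform/mask]
      exporters: [splunk_hec]
```

Because both steps run at the edge, the dropped records never consume bandwidth or license, and the masked values never leave the host.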

The broader trend here is the commoditization of the data collection plane. Industry heavyweights (or at least some of them) are realizing that they can no longer compete on how data is collected. The real value has shifted to how that data is analyzed and acted upon. By supporting a vendor-neutral protocol, Splunk is trying to remove the barriers to entry for new data sources. If it is easier to get data into the platform, customers are more likely to keep using it as their primary "brain" for operations.

We also observe that this move helps Splunk keep pace with the shifting preferences of the developer community. Modern engineers gravitate toward open-source tools. They want to use the same configurations and languages across different projects. By deepening its OTel adoption, Splunk is making itself more "developer friendly." It is a defensive move against nimble competitors who built their entire stacks on OTel from day one, and at the same time an offensive move to ensure Splunk remains the destination of choice for massive, multi-cloud log volumes.

Looking Ahead

Based on what we are observing, the convergence of logs, metrics, and traces into a single OTLP-based pipeline represents a fundamental re-architecture of the enterprise telemetry layer. The key trend we are going to be looking out for is the rate of "agent displacement" within the installed base. While the technical merits of a unified agent are clear, the operational inertia of replacing thousands of legacy forwarders cannot be overstated. Our perspective is that we are entering an era of "hybrid ingestion," where OTel handles the cloud-native, ephemeral workloads while legacy agents maintain the heartbeat of the core systems.

Going forward, we are going to be closely monitoring how the company delivers on the promise of "reduced vendor lock-in." While using an open collector makes it easier to switch backends, the proprietary value often remains trapped in the specific dashboards and alerting logic built atop the data. The announcement places Splunk in direct competition with the "OTel-native" upstarts and with traditional rivals such as Datadog, LogicMonitor, and New Relic, all of which have been aggressive in their OTel adoption strategies.

HyperFRAME will be tracking, in future quarters, how the performance overhead of the OTel Collector compares with that of the legacy Universal Forwarder. We anticipate that as organizations mature, the focus will shift from simple connectivity to sophisticated edge processing. This move is a necessary step, but the long-term winner will be the platform that makes OTel data the most actionable, not just the easiest to ingest.

Author Information

Steven Dickens | CEO HyperFRAME Research

Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the CEO and Principal Analyst at HyperFRAME Research.
Ranked consistently among the Top 10 Analysts by AR Insights and a contributor to Forbes, Steven's expert perspectives are sought after by tier one media outlets such as The Wall Street Journal and CNBC, and he is a regular on TV networks including the Schwab Network and Bloomberg.