Modern observability is meant to give engineering teams a clear view into their systems.

That’s not actually happening.

Instead, many teams can see only fragments of what is happening inside their applications, yet they’re paying more than ever for that “privilege.”

It starts to make sense when you understand how much data is now created and processed each day. As digital systems have scaled, so has their telemetry. Gartner recently noted that observability spending is increasing sharply, driven by the explosion of metrics, logs, traces and events generated across increasingly complex architectures.

In response, many have tried to control costs with a seemingly benign tactic: sampling. Whether by keeping only telemetry snippets, turning off logging in non-production environments, or dropping data deemed unimportant, sampling has become a budgetary release valve across SaaS and tech in general.
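The most common form of this tactic is head sampling: a per-trace keep/drop decision made up front at a fixed ratio. As a minimal sketch (the helper below is illustrative, not any specific vendor’s agent), a deterministic 1% sampler can be built by hashing the trace ID, which is similar in spirit to OpenTelemetry’s `TraceIdRatioBased` sampler:

```python
import hashlib

def keep_trace(trace_id: str, ratio: float = 0.01) -> bool:
    """Deterministically keep roughly `ratio` of traces.

    Hashing the trace ID (rather than calling random()) means every
    service that sees the same trace makes the same keep/drop decision.
    """
    digest = hashlib.sha256(trace_id.encode()).digest()
    # Map the first 8 bytes of the hash onto [0, 1).
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return bucket < ratio

# Roughly 1% of traffic survives; the other ~99% is gone for good.
kept = sum(keep_trace(f"trace-{i}") for i in range(100_000))
```

The determinism is the point: the sampler has no idea which traces will turn out to matter, so the decision is made blind.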

But in understandably trying to contain costs, teams undercut observability’s core purpose. If you’re only collecting part of the data, you’re no longer observing the system. You’re guessing.

The Paradox of Partial Data

Sampling is typically considered “good enough.” After all, how much could really be hiding in the 99% you throw away?

That question is rhetorical, of course, because the clear answer is, in practice, a tremendous amount.

The root cause of the most disruptive outage is rarely obvious. It’s usually buried inside some sort of anomaly, a single noisy trace, or a one-off misconfiguration. These clues often end up in the discarded data, not because they’re insignificant, but because they’re atypical, and uniform sampling treats rare events no differently than routine ones. Observability is, at its core, a search for unexpected behavior.

I’ve seen multiple teams that rely on sampled data experience:

  • Blind spots where a critical transaction failed, but the trace was dropped.
  • Fragmented workflows with logs stored in one system, metrics in another, and tracing in yet another, all with inconsistent collection rules.
  • Inefficient debugging due to engineers chasing symptoms rather than causes.
  • Gaps in security because potential indicators of compromise were lost in discarded telemetry.

Any one of these issues illustrates why cost-driven sampling isn’t worth the risks it creates.

How Did Observability Get to Where It Is?

The industry didn’t arrive at sampling because it’s a best practice. As usual, it’s all about dollars. We’re here because telemetry has outgrown traditional observability business models.

Many SaaS observability platforms price their services based on the volume of data they ingest. So, as telemetry grows, so do bills, and often unpredictably. Organizations are sometimes surprised by multi-million-dollar increases after starting with budgets as small as $10,000. In many enterprises, the bill becomes the main factor in determining what data is collected, where it’s collected and who has access to it.

All of this creates a conundrum: the more observability is needed, the more expensive it becomes, meaning the more organizations are incentivized to observe less.

Breaking the Cycle: Observability on Your Own Cloud

As organizations face rising costs across many different types of SaaS platforms, they’re adopting an approach that keeps data complete without ruining their budgets. Rather than sending all data to a vendor’s cloud, they are adopting a bring-your-own-cloud (BYOC) model.

In a BYOC architecture, the control plane (software logic, UI, and management) is delivered by a vendor for simplicity. Meanwhile, the data plane (where telemetry is stored and processed) runs inside the customer’s own environment.

This changes the economics.

Instead of being tethered to SaaS premiums and unpredictable billing, costs align directly with the organization’s cloud capacity and pricing, which are resources it already manages. When it comes to observability, enterprises can retain all telemetry at full fidelity because they’re no longer charged for volume by a third party.

The Advantage of Full-Fidelity Observability

It’s important to understand that BYOC doesn’t eliminate the role of intelligent telemetry management. Adaptive sampling, filtering and baselining still matter; they just stop being financial decisions. Instead of sampling to avoid a bill, teams sample when the data is truly irrelevant. This makes it a product of engineering need, not vendor pricing pressure.
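In practice, “sample only when the data is truly irrelevant” means biasing retention toward signal rather than thinning everything uniformly. A minimal sketch of that policy, assuming a simplified trace record (the field names and thresholds here are illustrative, not any particular platform’s schema):

```python
import random

def should_keep(trace: dict, healthy_sample_rate: float = 0.05) -> bool:
    """Engineering-driven sampling: never drop the interesting data."""
    if trace.get("status") == "error":
        return True                      # keep every failure
    if trace.get("duration_ms", 0) > 1_000:
        return True                      # keep every slow request
    # Routine, healthy traffic is the only thing we thin out.
    return random.random() < healthy_sample_rate

traces = [
    {"status": "ok", "duration_ms": 42},       # healthy: sampled at 5%
    {"status": "error", "duration_ms": 40},    # always kept
    {"status": "ok", "duration_ms": 2_300},    # always kept
]
decisions = [should_keep(t) for t in traces]
```

The design choice is the inversion: errors and outliers are exempt from sampling entirely, and only traffic already known to be routine is reduced. The budget question then becomes how cheap the healthy tail needs to be, not whether the anomalies survive.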

Implementing this mindset allows you to:

  • Maintain consistent visibility across dev, test, staging and production environments.
  • Correlate logs, traces, metrics and events at the source before context is lost.
  • Scale observability alongside business growth, not vendor cost structures.
  • Improve operational reliability and security posture.

Ultimately, observability can’t fulfill its intended promise when billing constraints compromise visibility. The increased reliance on sampling has uncovered a deeper truth: the traditional SaaS model has become misaligned with the goals of modern business and engineering teams.

BYOC helps you see everything that matters, correlate data across every layer, scale without the fear of cost spikes, and resolve issues using evidence rather than inference.

Best of all, when sampling is no longer a financial necessity, neither are the billing surprises that come with it.