
Last quarter I helped a large enterprise size a GPU cluster for real-time LLM inference. We profiled the workload and found a glaring inefficiency: the H100s hit 92% utilization for about 200 milliseconds per request during prompt processing, then cratered to 30% for the next 3–9 seconds while generating output tokens. Those GPUs were expensive paperweights for the majority of every request.
There’s plenty of chatter about the current GPU shortage: customers who can’t get chip allocations months in advance, hyperscalers committing hundreds of billions to lock up available capacity, startups waiting up to six months for an H100 node. The consensus is that demand far exceeds supply. That’s accurate, but it doesn’t tell the whole story.
The other half of the story is that many teams running LLM inference are wasting 60–80% of the GPU compute they’ve paid for.
Here’s the part that stung. Prompt processing saturated the GPU’s compute cores, but that phase took only about 200 milliseconds. The rest of the request, token generation, lasted anywhere from 3 to 9 seconds and kept maybe 30% of those same cores busy, bottlenecked on memory bandwidth rather than compute. We were paying GPU-hour prices set by peak capability while hitting peak utilization for only about 5% of the request lifecycle.
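The time-weighted arithmetic makes the waste concrete. Here is a minimal sketch using the profiled figures, taking 6 seconds as a representative decode duration from the 3–9 second range:

```python
# Effective GPU utilization over one request's lifecycle, using the
# profiled figures: ~200 ms of prompt processing at 92% compute
# utilization, then ~6 s of token generation at 30%.
prefill_s, prefill_util = 0.2, 0.92
decode_s, decode_util = 6.0, 0.30

total_s = prefill_s + decode_s
effective = (prefill_s * prefill_util + decode_s * decode_util) / total_s

print(f"prefill share of wall time: {prefill_s / total_s:.1%}")  # ~3%
print(f"effective utilization:      {effective:.1%}")            # ~32%
```

However you weight it, the blended number lands near the decode figure, because decode dominates wall time.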
Two workloads with opposite hardware requirements were sharing the same GPU and the same scheduling loop. Provision a database server this way and you’d be out of a job.
Imagine provisioning a database server sized for peak write throughput, then using it 90% of the time for reads. You’d split that into a write primary and read replicas without thinking twice. LLM inference has the same kind of split, and most teams haven’t noticed yet.
The fix is already built into the tools you’re probably using today. It’s called disaggregated inference: stop making one GPU do two different jobs. Route prompt processing to compute-optimized hardware, route token generation to memory-bandwidth-optimized hardware, and let each pool do the one thing it’s good at. The concept is as old as read replicas. The implementation is new.
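In outline, disaggregated serving splits each request into two stages connected by a KV-cache handoff. A deliberately simplified sketch of that flow; every name here is illustrative rather than any framework’s real API, and real systems move actual attention key/value tensors between pools:

```python
from dataclasses import dataclass

@dataclass
class KVCache:
    # Stand-in for the attention key/value tensors produced by prefill.
    tokens: list

def prefill(prompt: str) -> KVCache:
    # Compute-bound phase: one pass over the whole prompt on the
    # compute-optimized pool. Here we just tokenize as a placeholder.
    return KVCache(tokens=prompt.split())

def decode(cache: KVCache, max_new: int) -> list:
    # Memory-bandwidth-bound phase: generate one token at a time on the
    # bandwidth-optimized pool, reading the handed-off KV cache each step.
    out = []
    for i in range(max_new):
        out.append(f"tok{i}")       # placeholder for a sampled token
        cache.tokens.append(out[-1])
    return out

cache = prefill("why is the sky blue")   # runs on the prefill pool
completion = decode(cache, max_new=4)    # runs on the decode pool
print(completion)
```

In production, `prefill` and `decode` run on different nodes, and the cache handoff travels over NVLink, RDMA, or a dedicated interconnect rather than in-process.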
When I showed this data to the customer, their first question was: who’s already doing this differently? The answer surprised them. Perplexity runs disaggregated inference in production. So do Meta, LinkedIn, and Mistral. The DistServe paper out of UC San Diego introduced the idea in 2024. By early 2026, NVIDIA had built a full orchestration framework around it called Dynamo. vLLM, SGLang, and TensorRT-LLM all ship with native support. Red Hat and IBM Research open-sourced a Kubernetes-native version called llm-d. This isn’t a research concept waiting for adoption. It’s deployed at the companies serving more LLM traffic than anyone else.
The numbers are hard to argue with. Prompt-processing pools typically run at 90–95% utilization, simply because that’s all they do. Token-generation pools, batching hundreds of concurrent requests, sustain well above 70% of memory bandwidth. Overall GPU utilization nearly doubles. If you’re spending $2 million annually on inference GPUs, that’s $600K–800K back in your budget without serving a single additional request.
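The dollar range is straightforward to sanity-check. A back-of-envelope sketch, pure arithmetic, assuming the savings translate directly into fleet you no longer need:

```python
# What a $600K-800K saving on a $2M/yr GPU budget implies about the
# utilization improvement needed to serve the same traffic.
annual_spend = 2_000_000
for savings in (600_000, 800_000):
    fleet_fraction = 1 - savings / annual_spend   # share of fleet still needed
    util_gain = 1 / fleet_fraction                # required utilization multiplier
    print(f"${savings:,} back -> keep {fleet_fraction:.0%} of the fleet "
          f"({util_gain:.2f}x effective utilization)")
```

A 1.4–1.7× improvement is the conservative end of “nearly double,” which is why the savings figure is stated as a range rather than a single number.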
So why aren’t more teams using disaggregated serving?
I wondered the same thing. The first culprit I found was the dashboards. Most inference monitoring tools report a single “GPU utilization” number that averages both phases, prompt processing and generation, together. If your dashboard shows 55%, you feel fine about your resource usage, and you never see that utilization is actually 92% for 5% of wall time and 30% for the other 95%. The problem is hidden by your tooling.
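One fix on the tooling side is to tag utilization samples with the phase they came from instead of collapsing everything into one average. A toy sketch with synthetic samples; a real collector would read NVML/DCGM counters and take phase labels from the serving engine:

```python
# 100 synthetic utilization samples mimicking the profiled shape:
# brief near-saturated prompt processing, long low-utilization decode.
samples = [("prefill", 0.92)] * 5 + [("decode", 0.30)] * 95

overall = sum(u for _, u in samples) / len(samples)

by_phase = {}
for phase, u in samples:
    by_phase.setdefault(phase, []).append(u)

print(f"single averaged metric: {overall:.0%}")   # hides the bimodal shape
for phase, us in by_phase.items():
    print(f"  {phase}: {sum(us) / len(us):.0%} across {len(us)} samples")
```

The single metric looks like one mediocre number; the per-phase view shows one pool-worth of work running hot and another running starved.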
Then there’s muscle memory. Teams run the same playbook for LLM inference that they used for web serving: add replicas to handle load, put a load balancer in front. That works when every request consumes roughly the same resources. But horizontal scaling can’t fix a vertical mismatch, where one phase needs compute and the other needs memory bandwidth.
Disaggregating isn’t free. Beyond managing two separate resource pools, you need a KV-cache transfer path between them and a routing layer that knows which worker holds each request’s cache. For small deployments, say five or fewer GPUs, that complexity may never pay for itself. But the teams complaining loudest about GPU allocations are the ones running dozens to hundreds of GPUs, and at that scale the wasted utilization adds up to tens of millions of dollars annually.
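The routing layer’s core job is remembering which decode worker already holds a session’s KV cache, so follow-up turns don’t pay the transfer cost twice. A minimal sketch of cache-affinity routing; the worker names and hashing scheme are illustrative, not any production system’s design:

```python
import hashlib

DECODE_WORKERS = ["decode-0", "decode-1", "decode-2"]
cache_locations = {}  # session id -> worker currently holding its KV cache

def route(session_id: str) -> str:
    # Sticky routing: reuse the worker that already holds this session's
    # cache; otherwise assign one deterministically by hashing the id.
    if session_id in cache_locations:
        return cache_locations[session_id]
    digest = int(hashlib.sha256(session_id.encode()).hexdigest(), 16)
    worker = DECODE_WORKERS[digest % len(DECODE_WORKERS)]
    cache_locations[session_id] = worker
    return worker

first = route("chat-abc")
second = route("chat-abc")  # same session lands on the same worker
print(first, second)
```

Production routers also track cache evictions and per-worker load; this sketch shows only the affinity half of the problem.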
The industry conversation about GPU scarcity focuses almost entirely on the supply side. Build more fabs. Design better chips. Negotiate bigger cloud contracts. Those things matter. But the demand side deserves equal scrutiny. If every team running monolithic LLM inference switched to disaggregated serving tomorrow, the effective GPU supply would roughly double overnight. No new silicon required.
That won’t happen tomorrow. But it is happening. The tooling is ready, the research is proven, and the companies that figured this out early are serving the same traffic on half the hardware. Everyone else is still provisioning for writes and serving reads.
