
The architectural patterns that powered the microservices revolution are showing their age. When my peers and I began designing distributed systems for companies like Cisco, Amazon, and Palo Alto Networks, we optimized for stateless, horizontally scalable services with predictable resource consumption.
AI workloads follow a different set of rules. Transformer architectures demand frequent synchronization of gradient updates across massive parameter spaces, and the global dependencies of attention mechanisms run counter to the sparse, hierarchical communication patterns that traditional distributed systems handle well.
This mismatch becomes especially visible in multicloud database environments: for example, organizations often adopt a multicloud database approach (e.g., two different databases in AWS and GCP), which works well for transactional workloads. But training a single transformer across both clouds forces every training step to synchronize gradients over high-latency interconnects.
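A rough back-of-envelope estimate makes the penalty concrete. All numbers below are illustrative assumptions, not measurements from any particular deployment:

```python
# Estimate per-step gradient synchronization time when a transformer is
# trained across two clouds. All figures are illustrative assumptions.

PARAMS = 7e9                 # 7B-parameter model
BYTES_PER_GRAD = 2           # fp16 gradients
payload_gb = PARAMS * BYTES_PER_GRAD / 1e9   # ~14 GB moved per sync

INTRA_CLUSTER_GBPS = 400     # NVLink/InfiniBand-class fabric (assumed)
CROSS_CLOUD_GBPS = 10        # inter-cloud interconnect (assumed)
CROSS_CLOUD_RTT_S = 0.03     # ~30 ms round trip (assumed)

def sync_seconds(gbps: float, rtt_s: float = 0.0) -> float:
    """Seconds to move one full gradient payload plus one round trip."""
    return payload_gb * 8 / gbps + rtt_s

print(f"intra-cluster: {sync_seconds(INTRA_CLUSTER_GBPS):.2f} s/step")
print(f"cross-cloud:   {sync_seconds(CROSS_CLOUD_GBPS, CROSS_CLOUD_RTT_S):.2f} s/step")
```

Under these assumptions, the cross-cloud step spends seconds, not milliseconds, just moving gradients, which is why naive multicloud training stalls.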
GPU resource management exposes a fundamental mismatch between Kubernetes and AI infrastructure. Unlike CPUs, which can be overcommitted through time-sharing, GPU execution bypasses the Linux kernel scheduler and doesn’t obey cgroups, namespaces, or standard scheduling controls. Once a CUDA kernel launches, it runs to completion with no native preemption.
Workaround solutions are not native primitives
The industry has responded with solutions like NVIDIA’s Multi-Instance GPU (MIG) and time-slicing, but these remain workarounds rather than native primitives. Kubernetes Dynamic Resource Allocation represents progress toward treating accelerators as first-class citizens, yet the scheduling model still assumes resources can be cleanly partitioned. Among cloud providers, both GCP and AWS treat GPUs as exclusive resources at the VM or pod level. Many AI workloads, including fine-tuning, need only 20-30% of a GPU’s capacity, but schedulers can grant them only 100% or 0%.
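To see how much capacity all-or-nothing allocation strands, consider a toy comparison between exclusive GPUs and MIG-style slices. The job demands are illustrative; the 1/7 slice granularity mirrors an A100’s MIG profile:

```python
# Compare GPU capacity wasted by exclusive (all-or-nothing) allocation
# versus a MIG-style partitioned GPU. Job demands are illustrative.
import math

jobs = [0.25, 0.30, 0.20, 0.25]   # fraction of one GPU each job needs

# Exclusive allocation: every job occupies a whole GPU.
exclusive_gpus = len(jobs)
exclusive_waste = exclusive_gpus - sum(jobs)

# MIG-style: carve each GPU into fixed 1/7 slices and give every job
# enough slices to cover its demand.
SLICE = 1 / 7
slices_needed = sum(math.ceil(f / SLICE) for f in jobs)
mig_gpus = math.ceil(slices_needed / 7)
mig_waste = mig_gpus - sum(jobs)

print(f"exclusive: {exclusive_gpus} GPUs, {exclusive_waste:.2f} GPUs idle")
print(f"MIG:       {mig_gpus} GPU(s), {mig_waste:.2f} GPUs idle")
```

Even this crude model shows exclusive allocation burning several whole GPUs for one GPU’s worth of actual demand.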
AWS and GCP each bring distinct strengths along with distinct challenges. AWS offers tight integration with Nitro and its networking stack, but EKS inherits the same Kubernetes GPU limitations, and MIG setup on AWS is manual, which creates configuration challenges. GCP, on the other hand, offers tighter GKE integration, and TPUs bring a different scheduling model that avoids some of the GPU pain. But TPUs are specialized and not portable, and they frequently run into incompatibilities with custom CUDA workflows.
Systems that learn and adapt over time require architectural patterns that account for non-determinism. Traditional microservices assume that identical inputs produce identical outputs, enabling straightforward testing, debugging, and rollback strategies. AI components introduce statistical behavior that varies with training data, model versions, and inference conditions.
The current resource management model lacks the flexible requests-and-limits paradigm needed for workloads where the specific GPU type and generation can drastically impact performance. A training job optimized for an A100 may behave entirely differently on an H100, and workload placement must account for these differences in ways traditional orchestration never anticipated. From a database management system (DBMS) perspective, data consistency in an AI-driven system no longer hinges on the model’s output; it shifts to the data and metadata that produced that output. Modern AI systems focus on keeping inputs, model versions, and decision context strongly consistent, while treating AI output as probabilistic: a derived view rather than the canonical truth.
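A minimal sketch of generation-aware placement, with hypothetical node labels loosely modeled on Kubernetes node selectors, looks like this:

```python
# Toy placement pass that treats GPU generation as a hard constraint
# rather than a generic "1 GPU" request. Node labels and job specs are
# hypothetical, loosely modeled on Kubernetes node selectors.

nodes = [
    {"name": "node-a", "gpu": "nvidia-a100", "free": 2},
    {"name": "node-b", "gpu": "nvidia-h100", "free": 1},
]

jobs = [
    {"name": "finetune-llm", "gpu_selector": "nvidia-h100", "gpus": 1},
    {"name": "batch-embed",  "gpu_selector": "nvidia-a100", "gpus": 2},
]

def place(job, nodes):
    """Bind a job only to a node whose GPU generation matches its selector."""
    for node in nodes:
        if node["gpu"] == job["gpu_selector"] and node["free"] >= job["gpus"]:
            node["free"] -= job["gpus"]
            return node["name"]
    return None   # queue the job rather than degrade onto the wrong GPU

assignments = {job["name"]: place(job, nodes) for job in jobs}
for name, node in assignments.items():
    print(name, "->", node)
```

The key design choice is the `None` branch: refusing to place a job on a mismatched generation, instead of silently accepting degraded performance.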
Mixed workloads create orchestration and coordination challenges
Organizations running mixed workloads face orchestration decisions that existing tools handle poorly. AI-driven systems can analyze historical patterns and anticipate resource needs before bottlenecks occur, but integrating predictive scaling with traditional reactive autoscaling creates coordination challenges. Projects like Kueue and Volcano offer gang-scheduling semantics where entire workloads are admitted or queued atomically, and cohort-based models allow quota borrowing across teams. These approaches acknowledge that distributed training jobs cannot be treated as collections of independent pods.
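The all-or-nothing semantics can be sketched in a few lines. This is an illustration of the gang-admission idea, not Kueue’s or Volcano’s actual API:

```python
# Sketch of gang admission: a distributed training job is admitted only
# if ALL of its workers fit at once; otherwise the whole job queues.
# Loosely modeled on Kueue/Volcano semantics; names are illustrative.

def gang_admit(workers: int, gpus_per_worker: int, free_gpus: int):
    """Return (admitted, remaining_free_gpus). All-or-nothing."""
    needed = workers * gpus_per_worker
    if needed <= free_gpus:
        return True, free_gpus - needed
    # No partial placement: avoids the deadlock where half a job
    # holds GPUs forever while waiting for peers that never arrive.
    return False, free_gpus

free = 8
ok1, free = gang_admit(workers=4, gpus_per_worker=2, free_gpus=free)
print("job-1 admitted:", ok1)          # takes all 8 GPUs atomically
ok2, free = gang_admit(workers=2, gpus_per_worker=1, free_gpus=free)
print("job-2 admitted:", ok2)          # queued: zero GPUs free
```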
From the networking perspective, AI-system architectures must treat the network topology as a first-class design constraint rather than an implementation detail. Traditional microservices can tolerate best-effort routing and jitter, but AI systems often require low latency and heavy east-west bandwidth. Hence, AI architectures should place AI workloads in high-bandwidth clusters and avoid cross-region, and even cross-zone, communication.
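A minimal sketch of such a topology filter, with illustrative zone capacities, might look like this:

```python
# Sketch of a topology filter: keep all workers of one training job in a
# single high-bandwidth zone, queueing rather than spilling across zones.
# Zone names and free-GPU counts are illustrative.

zones = {"us-east1-a": 6, "us-east1-b": 3}   # free GPUs per zone

def place_single_zone(workers: int):
    """Pick one zone that can host the whole job; never split zones."""
    for zone, free in zones.items():
        if free >= workers:
            zones[zone] = free - workers
            return zone
    return None   # queue instead of paying cross-zone all-reduce latency

first = place_single_zone(4)
second = place_single_zone(5)
print(first)    # fits in one zone
print(second)   # queued: no single zone has 5 free GPUs left
```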
The path forward requires architects to reconsider assumptions baked into two decades of distributed systems design. Stateless services, independent scaling, and clean data partitioning served us well for web applications and API-driven architectures. AI workloads demand coordination patterns that feel more like high-performance computing than cloud-native microservices.
Those of us building the next generation of distributed systems must graft these capabilities onto existing infrastructure while organizations continue running production workloads. The monolith-to-microservices migration took a decade. The transition to AI-native architecture is already underway, and we have less time to get it right.
A traditional distributed system operates mostly within predefined, deterministic trust boundaries. AI architectures, by contrast, aggregate information across many datasets, and training jobs often require access to sensitive data and highly privileged infrastructure. This creates a completely new attack surface. The industry focuses on model vulnerabilities and prompt injection, but tends to overlook the systemic threats that arise when these AI models gain access to high-privilege pipelines.
