
Advancements in artificial intelligence in 2025 marked a seismic shift in how organizations can use the tools to automate IT processes, predict network interruptions and identify and remediate issues that could lead to poor performance, or worse, cybersecurity breaches.
This article includes the thoughts of industry leaders as to what we might expect moving into 2026.
Bart Willemsen, Rene Buest and Mark Horvath, analysts at Gartner
The rise of ‘confidential computing’
Confidential computing changes how organizations handle sensitive data. By isolating workloads inside hardware-based trusted execution environments (TEE), it keeps content and workloads private even from infrastructure owners, cloud providers, or anyone with physical access to the hardware. This is especially valuable for regulated industries and global operations facing geopolitical and compliance risks, but also for cross-competitor collaboration. Major cloud providers now offer confidential computing options, and industry adoption is accelerating as AI, analytics, and compliance needs grow rapidly. Hardware-based security offers a root of trust that is difficult to compromise, supporting secure cloud adoption and enabling trusted collaboration on analytics and AI projects. Remote and device attestation adds another layer of assurance. Integrating TEEs across different chip types, cloud providers, and environments can be complex. Most solutions are best suited for IaaS and custom workloads, with (currently) limited support for SaaS and PaaS. Specialized skills or third-party platforms may be needed to orchestrate and manage confidential computing as adoption grows.
Naren Narendran, chief science and engineering officer at Aerospike
Self-optimizing infrastructure becomes the path to higher ROAI
In 2026, data infrastructure will become autonomous, continuously optimizing its storage, caching, and access paths to support agentic AI systems in real time. In addition to uptime, latency and data freshness will also become universally accepted metrics of reliability. Successful organizations will recognize the data layer as a living system that adapts to changing workloads, applies policies in real time, and helps the agentic AI agents process the related information. These capabilities will strengthen the return on AI investment (ROAI) by improving the efficiency and consistency of every AI-driven workflow. Organizations will gain greater visibility into how the performance of their data infrastructure impacts business-related outcomes, which will foster more confidence in evaluating AI investments.
Eric Tschetter, chief architect at Imply
The rise of decoupled observability stacks
In 2026, the era of the all-in-one observability black box will be over. AI is driving massive growth in logs, metrics, and traces, pushing tightly coupled observability platforms past their limits. Organizations are reaching a breaking point: they can no longer scale these monolithic systems without sacrificing data visibility or having to absorb runaway costs. The cost and complexity of scaling current observability stacks will become unsustainable. Forward-thinking teams are already starting to rethink architecture, pulling apart the data layer from the tools that sit on top of it. We’ve seen this movie play out before – business intelligence went through the same evolution over the last 40 years. It started as tightly coupled stacks in the 80s and exists today as a decoupled architecture that gives teams flexibility, choice, and control. The separation gave rise to the Snowflakes, Databricks, Fivetrans, and Tableaus of the world. Observability is next. The observability warehouse (i.e., specialized data stores for logs, metrics and traces) will emerge as the new standard, serving as a central data layer that reduces dependence on any one monolithic platform, freeing teams from vendor lock-in and letting them choose the best tools for the job.
Ajay Patel, general manager of Apptio and IT Automation at IBM
FinOps to become a core business capability
As AI capabilities develop further, FinOps will simultaneously go under a noticeable evolution: one that shifts the practice from dashboards and report-driven specialty into an automated, AI-powered capability that delivers real-time intelligence directly to engineers, product teams and business leaders. AI will provide predictive insights, surface optimization opportunities, take well-grounded actions, forecast cost impacts, and embed financial context into everyday decisions. With this forecast in mind, FinOps will transform from a specialized practice focus into a core business capability and evolve from a “back-of-house” function into a foundational discipline that empowers teams to become stewards of smarter, data-informed decisions.
Lou Flynn, market strategist for Applied AI, Open Source Software & ModelOPS at SAS
Time to mop up AI slop
Remember when the log4J breach rocked the open source community? In 2026, mature, early AI adopters that bypassed attempts to measure and incorporate AI responsibly will be exposed. The result will be a massive loss of credibility as their use of commoditized AI slop is surfaced to the masses.
Jans Aasman, CEO of Franz, Inc.
Policy-as-Knowledge
In the coming year, enterprises will stop treating compliance as a post-check and start embedding it as machine reasoning. The rise of Policy-as-Knowledge—encoding HIPAA, SOX, CMS, and AML rules as machine-checkable logic—will transform how AI agents act within regulated workflows. Instead of relying on human reviews or ad hoc filters, autonomous systems will reason over encoded policies before executing an action, much like compilers enforce syntax before code runs. Neuro-symbolic AI will make this possible by linking language models to ontologies, rules, and provenance graphs, enabling real-time validation of whether a decision complies with institutional or legal frameworks.
Nick Zeigler, VP of digital innovation at All Covered
Cloud repatriation accelerates as enterprises prioritize resilience
After watching hyperscaler outages take down critical infrastructure repeatedly, enterprises will realize that modern private cloud solutions now deliver superior uptime at predictable costs, with many offering 99.99%+ SLAs as standard versus the 99.95% you get from hyperscalers. The migrations back to private cloud will accelerate as companies discover they can achieve better availability, predictable monthly costs, and actual accountability from their providers, without playing Russian roulette every time a hyperscaler has a bad day.
Bennie Grant, COO of Percona
The open source community continues the fight against restrictive relicensing
It’s unclear if or when another open source company will change its license, but what’s become abundantly clear is how the community will react. Every time a company attempts to impose restrictions, developers and enterprises respond with innovation and collective action. Moving forward, the community will continue to create alternatives, influence licensing decisions, and ensure that openness and freedom remain the defining principles of the ecosystem. Transparency isn’t just a standard; it’s the bedrock of open source.
Kumar Mehta, founder and CDO of Versa Networks
Security for AI models
A “WAF for LLMs” becomes a real runtime layer – Instead of treating GenAI risk as just user behavior, enterprises will deploy an AI-specific runtime defense layer in front of public-facing and internal GenAI models to semantically inspect prompts and responses. The goal is to block prompt injection, data leakage, model abuse, and other AI-native attack patterns — in real time, at scale. Critically, SASE today largely protects the user/agent-to-model path. To truly secure GenAI, security must also sit in the application-to-model data path — delivered through scalable reverse proxies, SDKs, or API gateway integrations — so identity, DLP, and traffic controls apply directly to AI calls and responses.
Andrew Hillier, CTO and co-founder of Densify
GPU Optimization, Efficiency and Yield
We’re progressing toward a shift in the GPU adoption lifecycle. Companies have deployed hundreds of thousands of GPUs to get AI services running, and will soon face the scrutiny that comes with large cost line items and questioning aspects such as utilization and yield. The focus will begin to be infused with not just ‘do we have enough GPUs’ to ‘are we using them properly.’ 2026 will be defined by intelligent GPU usage. Companies will stop assuming every AI workload needs a GPU and start asking the right question: What’s the minimum viable model and infrastructure to get the job done? Sometimes that’s a better infrastructure choice or better placement of workloads on what they already have.
