KubeCon + CloudNativeCon is happening this week in Salt Lake City, UT, bringing the Kubernetes community together in one location and giving companies in the space the opportunity to launch new offerings and update their products.

We’ve collected the news announcements from those companies all in one place so you can stay up to date. Keep checking back here, as we will be updating this list as news comes in. 

Last updated: 11/13 at 9:45 AM ET

Mirantis Streamlines Kubernetes Operations, Delivers Security for Enterprise Workloads

Mirantis, which provides organizations with total control over their strategic infrastructure using open source software, today announced Mirantis Kubernetes Engine (MKE) 4, the latest evolution of its long-established product line that sets the standard for secure enterprise Kubernetes.

MKE powers some of the world’s largest, highest-performance, and most secure clusters – hosting mission-critical workloads in every industry. More than 300,000 nodes of MKE have been deployed in production.

MKE 4’s 100% open source architecture is based on k0s – a flexible, scalable, CNCF-certified Kubernetes distribution. Like its predecessors, MKE 4 delivers enterprise-ready Kubernetes that is high-performance, highly secure (with FIPS 140-2 encryption), and easy to operate through a convenient web UI.

The new MKE 4 provides flexibility for platform engineers – it is designed to precisely meet the requirements of virtually any use case with Mirantis-validated, composable open source components from the CNCF ecosystem. MKE 4 uses highly automated, declarative lifecycle management: platform configurations are continuously monitored and corrected by Kubernetes operators to prevent configuration drift.

In addition, installing MKE Virtualization (KubeVirt) enables virtual machine (VM) workloads to run alongside containerized ones.

“We believe that Kubernetes is the core platform for all technology infrastructure and have designed MKE 4 to provide users with the flexibility to compose the best platform for their needs,” said Shaun O’Meara, chief technology officer, Mirantis. “This agile approach to delivering Kubernetes clusters removes technical lock-in and supports a convergence of containers and virtual machines into a single platform.”

Key Features:

  • Optimizable and composable architecture: MKE 4 allows customers to optimize their stack with validated templates and a fully open-source platform, enabling them to swap in alternative components to enhance security, stability, and performance.
  • Converged platform for VMs and cloud native workloads: MKE 4 includes KubeVirt for integration of container and VM workloads, creating a unified developer experience and platform for cloud-native and virtualized applications.
  • Add new capabilities easily: Mirantis add-ons include easy-to-consume complete solutions for logging/monitoring/alerting, policy management, cost analytics and more.
  • Automated drift correction with MKE Operator: MKE 4 clusters are continually reconciled against their declarative configurations, preventing drift and risks from manual changes.
  • Automated updates with MKE Operator: MKE 4 clusters can be updated with the new mkectl client, or rolling updates can be automated with the built-in operator.

For existing MKE users, upgrading to MKE 4 requires just one command or a single click. Users of MKE 3.7 can easily transition while keeping all workloads running. For Swarm users, Mirantis will continue support in MKE 3.

MKE provides enterprises with a central point of collaboration for developers and operations to build, run, and scale cloud-native applications.

MKE 4 is scheduled to become available November 20 with packaged options for 24/7 enterprise support.

New Relic announces one-step Kubernetes observability 

New Relic announced one-step observability for Kubernetes to solve the challenges developers face when monitoring dynamic Kubernetes environments. New Relic automatically instruments APM with Kubernetes deployments, eliminating the need for additional configuration. It provides AI-powered insights and out-of-the-box dashboards and views to manage Kubernetes workloads faster – ultimately speeding up incident resolution and improving developer productivity.

Monitoring the performance of applications deployed on Kubernetes poses significant challenges. To gain observability across applications and Kubernetes clusters, developer and platform teams must continuously modify workloads and deploy agents, a cumbersome and time-consuming process that hurts productivity. New Relic is bringing Intelligent Observability to developers so they can automatically instrument APM with Kubernetes while gaining AI-driven insights. The benefits of observability are clear – according to New Relic’s 2024 Observability Forecast, organizations with full-stack observability experience 79% less downtime, saving $42 million per year.

New Relic’s key offerings include:

  • Native Support for Prometheus & OpenTelemetry: New Relic provides native support for Prometheus and OTel-instrumented Kubernetes clusters, enabling rapid onboarding, automated correlation, and out-of-the-box insights in New Relic’s native UI.
  • Democratizing observability with New Relic AI: Users of any role or level of expertise can easily access insights and understand where action needs to be taken through natural language prompts.

New Relic One-Step Observability for Kubernetes is available to all customers. Get started for free here.

MinIO releases AIStor: Purpose-built for AI and data workloads

High-performance object storage provider MinIO today announced the release of AIStor, an evolution of its flagship Enterprise Object Store designed for the exascale data infrastructure challenges presented by modern AI workloads. AIStor provides new features, along with performance and scalability improvements, to enable enterprises to store all AI data in one infrastructure.

Recent research from MinIO underscores the importance of object storage in AI and ML workloads. Polling more than 700 IT leaders, MinIO found that the top three reasons motivating organizations to adopt object storage were to support AI initiatives and to deliver performance and scalability modeled after the public clouds. This has driven the company to build new AI-specific features, while also enhancing and refining existing functionality, specifically catered to the scale of AI workloads. 

“The launch of AIStor is an important milestone for MinIO. Our object store is the standard in the private cloud and the features we have built into AIStor reflect the needs of our most demanding and ambitious customers,” said AB Periasamy, co-founder and CEO at MinIO. “It is not enough to just protect and store data in the age of AI, storage companies like ours must facilitate an understanding of the data that resides on our software. AIStor is the realization of this vision and serves both our IT audience and our developer community.”

A key new feature is a new S3 API, promptObject. This API enables users to “talk” to unstructured objects the same way one would engage an LLM, moving the storage world from a PUT-and-GET paradigm to a PUT-and-PROMPT paradigm. Applications can use promptObject through function calling with additional logic, and functions can be chained so that multiple objects are addressed at the same time. For example, when querying a stored MRI scan, one can ask “where is the abnormality?” or “which region shows the most inflammation?” and promptObject will answer. This means application developers can dramatically expand the capabilities of their applications without requiring domain-specific knowledge of RAG models or vector databases, simplifying AI application development while making it more powerful.

Additional new and enhanced capabilities in MinIO AIStor include:

  • AIHub: a private Hugging Face API compatible repository for storing AI models and datasets directly in AIStor, enabling enterprises to create their own data and model repositories on the private cloud or in air-gapped environments without changing a single line of code. This eliminates the risk of developers leaking sensitive data sets or models.
  • Updated Global Console: a completely redesigned user interface for MinIO that provides extensive capabilities for Identity and Access Management (IAM), Information Lifecycle Management (ILM), load balancing, firewall, security, caching and orchestration, all through a single pane of glass. The updated console features a new MinIO Kubernetes operator that further simplifies the management of large-scale data infrastructure spanning hundreds of servers and tens of thousands of drives.
  • Support for S3 over Remote Direct Memory Access (RDMA): enables customers to take full advantage of their high-speed (400GbE, 800GbE, and beyond) Ethernet investments for S3 object access by leveraging RDMA’s low-latency, high-throughput capabilities, and provides performance gains required to keep the compute layer fully utilized while reducing CPU utilization. 

For more on the research, see the MinIO blog. To learn more about AIStor, visit www.min.io or read the blog at blog.min.io/aistor.

Splunk expands observability portfolio

Splunk today announced innovations across its expanded observability portfolio to empower organizations to build a leading observability practice. These product advancements provide ITOps and engineering teams with more options to unify visibility across their entire IT environment to drive faster detection and investigation, harness control over data and costs and improve their digital resilience.

Splunk’s observability portfolio, supercharged by Splunk AppDynamics, provides organizations with deeper business context and broader coverage across both three-tier and microservices environments, for unified visibility across any environment and any stack. Key innovations include:

  • Enhancements to the unified observability experience: Customers now benefit from improved integration of Splunk AppDynamics SaaS within the Splunk observability portfolio, enabling a more frictionless experience and actionable insights for faster troubleshooting across their entire environment. Key features include:
    • General availability of Single Sign-On (SSO) and Deep Linking between Splunk Cloud and Splunk AppDynamics to improve operational efficiency and streamline troubleshooting workflows.
    • General availability of an enhanced User Interface to deliver a more consistent look and feel for the user, and a more cohesive troubleshooting experience across Splunk AppDynamics and Splunk Observability Cloud.
    • General availability of the Log Observer Connect for Splunk AppDynamics: Announced in preview at Cisco Live! Las Vegas, this integration drives faster troubleshooting across on-premises and hybrid environments by enabling customers to traverse from dashboards and visualizations in Splunk AppDynamics to relevant logs in the Splunk Platform with a single click.

Splunk continues to innovate to reduce toil in the problem identification and resolution process so users can get to value faster and reduce their mean time to detect (MTTD) and mean time to resolve (MTTR).

AI enhancements to Splunk Observability Cloud include:

  • Updates to Tag Spotlight capability provide a more intuitive understanding of common problems across applications and end-user experience, facilitating faster troubleshooting and improving incident resolution.
  • Infrastructure Monitoring with Kubernetes Proactive Troubleshooting: This new feature provides ITOps and engineering teams with improved drill-down experiences, simplified navigation and new list views for the Kubernetes navigator, to speed up MTTR and maintain optimal performance across their Kubernetes environments.

Threat Model and Independent Verifier Audit Examine the Security of eBPF

The eBPF Foundation, which drives the technical vision and direction of eBPF across the open source ecosystem in an independent forum, has announced the release of an eBPF Security Threat Model produced by ControlPlane, as well as an eBPF Verifier Code Audit produced by NCC Group.

Security Threat Model

Conducted by ControlPlane under sponsorship of the eBPF Foundation, the Security Threat Model examined security guidance for deploying eBPF, and how to mitigate potential threats and vulnerabilities. Generally, the research found that eBPF is a highly secure technology thanks to built-in security features, including a verifier that ensures the safety of eBPF programs.

The threat modeling approach was structured around:

  1. What are we building? This involves understanding what eBPF is, and how eBPF programs work.
  2. What can go wrong? Following the definition of a simple, high-level scenario in the Threat Model Scope, developing attack trees to explore how an attacker could utilize eBPF for nefarious purposes.
  3. What can we do about the things that can go wrong? Once a list of threats has been established, inherent eBPF controls and end-user recommendations are mapped against them.
  4. Are we doing a good job? Finally, the threat model’s outcomes are reviewed to provide practical guidance for eBPF adopters.

To address the threats identified, the report authors made several recommendations:

  1. Least Privilege Principle: Grant eBPF programs only the necessary permissions.
  2. Supply Chain Security: Ensure the integrity of eBPF tools and libraries.
  3. Regular Updates: Keep the kernel and eBPF tools up-to-date with the latest security patches.
  4. Monitoring and Logging: Implement robust monitoring and logging to detect and respond to security incidents.
  5. Threat Modeling: Conduct regular threat modeling exercises to identify potential vulnerabilities and risks.
  6. Disabling Unprivileged eBPF: Unprivileged eBPF should be disabled by default to reduce the attack surface.
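
The last recommendation maps to a standard kernel setting: on Linux, unprivileged use of the bpf() syscall is controlled by the kernel.unprivileged_bpf_disabled sysctl. A minimal sketch (the drop-in file name is illustrative):

```shell
# Disable bpf() for unprivileged users. A value of 2 disables it but
# lets a privileged process re-enable it later; a value of 1 disables
# it irreversibly until reboot.
sudo sysctl -w kernel.unprivileged_bpf_disabled=2

# Persist the setting across reboots (file name is illustrative).
echo 'kernel.unprivileged_bpf_disabled = 2' | \
  sudo tee /etc/sysctl.d/99-disable-unpriv-bpf.conf
```

Note that recent kernels built with CONFIG_BPF_UNPRIV_DEFAULT_OFF already default to this value, so many distributions ship with unprivileged eBPF disabled out of the box.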

Download the full eBPF Security Threat Model.

Verifier Code Audit

The eBPF Foundation engaged NCC Group to conduct a security source code review of the eBPF Verifier. The review included:

  • Identification of the properties the eBPF Verifier is trying to prove.
  • Source code review of the main logic of the eBPF verifier, as (typically) invoked via the do_check() function in kernel/bpf/verifier.c.
  • Any issue that could allow an eBPF program to bypass the constraints the Verifier is meant to enforce, leading to standard confidentiality, integrity, and availability concerns.

Overall, the code review found that the eBPF community has been highly effective in identifying bugs, and efficient in fixing them. The report also points out that while the eBPF Verifier is an important tool in ensuring security of eBPF deployments, it is not the only one, as eBPF is “designed to use the Linux privilege model to control access to eBPF, which mitigates the impact of security issues within the verifier.”

The assessment uncovered several code flaws. The most notable finding was a vulnerability enabling a privileged attacker to read and write arbitrary kernel memory (find_equal_scalars).

This vulnerability has been addressed by the community. The report also made additional recommendations for improving security of the Verifier such as refactoring complex functions and adding details about what the Verifier enforces to documentation.

Download the full eBPF Verifier Code Audit.

env0 Expands Cloud Compass with Azure Support, Enhances Cloud Asset Management

env0, a leading Infrastructure as Code (IaC) automation and management platform, announced today that Cloud Compass, its AI-powered cloud asset management solution, will expand to include support for Microsoft Azure.

Cloud Compass is part of the env0 platform, empowering organizations to enhance their cloud governance and mitigate risks, enabling them to:

  • Track IaC Coverage: Using proprietary AI-assisted logic, Cloud Compass audits cloud infrastructure, identifying, itemizing, and categorizing resources to indicate which are managed via IaC and which rely on manual operations or Cloud API.
  • Auto-Assess Risk: Cloud Compass continuously monitors activity, assigning each resource a severity score to help teams prioritize items for IaC import and mitigate potential security, compliance, and reliability risks.
  • Streamline Resource Importing: Leveraging GenAI, Cloud Compass can also generate custom import code for each asset, streamlining the codification process. When used alongside the env0 platform, this not only saves time but also ensures compliance with security and governance policies.

“Addressing the risks and unintended costs associated with unmanaged cloud resources is essential for secure and efficient cloud operations,” said Yuval Nelinger, Director of Product at env0. “In that respect, Cloud Compass is a game changer that tackles these challenges head-on, enabling teams to automatically track IaC coverage and close critical gaps by moving resources into IaC management to enhance governance. Expanding these capabilities to Azure was a popular demand from our customers, and it will help more organizations to use Cloud Compass to mitigate risks and optimize costs, not just in Azure but across multi-cloud environments.”

In addition to Azure support, the env0 team is actively working on more upgrades to its Cloud Compass feature. In the near term, the roadmap includes adding GCP support and integrating with existing env0 drift detection capabilities. This integration will leverage information from Cloud Compass to provide additional contextual details, helping teams reduce MTTR and improve visibility into the origins of changes across cloud environments.

Tigera Enhances Calico with Major Network and Runtime Security Updates

Tigera, the creator of Project Calico, the most adopted technology for container networking and security, today announced several new features that significantly advance Calico’s network security and runtime security capabilities. Tigera will debut the latest updates to Calico Cloud, Calico Enterprise, and Calico Open Source during KubeCon North America at Booth #H7.

With the rise in Artificial Intelligence (AI) applications, and the infrastructure trend of migrating from virtual machines (VMs) to Kubernetes, network security has become critical. Tigera’s new updates to Calico extend its network security and visibility capabilities to VMs and hosts, and provide several new enhancements for implementing network security.

The new release of Calico also includes essential capabilities for security teams. Today, there is a critical need to simplify security monitoring. Security operations teams are overwhelmed with the number of security events and false positives, and need solutions that help them become more efficient and effective in their roles. Tigera has enhanced Calico’s runtime security capabilities, including fine-tuning the detectors to eliminate noise and make the detection more targeted.

Network Security Enhancements

  • Policy Tiers and Support for AdminNetworkPolicy and BaselineAdminNetworkPolicy – Calico now supports the new Kubernetes admin policy types and Calico policy tiers that provide granular control over policy precedence, ensuring predictable, consistent enforcement and enabling better collaboration between teams.
  • Extend Calico Network Security Beyond Kubernetes to VMs and Hosts – Calico can protect VMs and hosts running outside of a Kubernetes cluster, significantly expanding the scope of how users can leverage Calico to secure application workloads.
  • Native Support for nftables – Calico introduces native support for nftables, ensuring that Kubernetes users can smoothly transition from iptables to nftables while maintaining performance and compatibility.
  • New Sidecar Deployment for Envoy in Calico – Ensures greater compatibility with certain Kubernetes platforms such as GKE, AKS, and EKS, as well as with WireGuard.
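
For context on the first bullet, the upstream Kubernetes API types are AdminNetworkPolicy and BaselineAdminNetworkPolicy, from the policy.networking.k8s.io group (still alpha). A minimal sketch of an admin policy; the names and labels are illustrative, and exact field shapes may vary by API revision:

```yaml
apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: deny-sandbox-to-prod      # illustrative name
spec:
  priority: 10                    # lower number = higher precedence
  subject:
    namespaces:
      matchLabels:
        env: production
  ingress:
    - name: block-sandbox-traffic
      action: Deny                # admin policies support Allow, Deny, Pass
      from:
        - namespaces:
            matchLabels:
              env: sandbox
```

Unlike namespaced NetworkPolicy objects, these cluster-scoped policies let administrators set guardrails that application teams cannot override, which is the precedence model Calico’s policy tiers build on.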

Runtime Security Enhancements

  • Fine-Tuned Runtime Threat Detection for Accuracy and Efficiency – Calico allows administrators to select which types of detectors to enable in their cluster, enabling teams to phase their deployment and tune and customize threat detection.
  • Significant Reduction of False Positives – Calico enables operators to bypass threat detection for certain known processes, thereby eliminating false positives. 
  • Bolstered Network-Based Threat Detection – Calico supports the ability to customize SNORT rules for Deep Packet Inspection (DPI) on a workload basis to improve accuracy. 
  • Insight into the Exploitability of Vulnerabilities to Prioritize Remediation – Calico introduces new metadata, including the Exploit Prediction Scoring System (EPSS) score and information on known exploits, to estimate the likelihood that a vulnerability will be exploited in the wild.

“We are pleased to extend Calico’s renowned network security beyond Kubernetes clusters to virtual machines and hosts,” said Amit Gupta, Chief Product Officer, Tigera. “Organizations can now use a single pane of glass to visualize and manage network security across their Kubernetes and non-Kubernetes environments. All network security features, including egress access controls and microsegmentation, will work in the same way they do in Kubernetes clusters. These updates further our mission to equip users with robust, comprehensive networking and security solutions to meet their modern business needs.”

With these new updates, Calico provides platform and security engineers with more control, visibility, and efficiency in securing and managing their Kubernetes and hybrid environments. Calico’s latest enhancements offer both flexibility for development teams and strict controls for platform and security teams. Learn more about Calico’s new capabilities here.

Pulumi Brings Streamlined Management and Security to Kubernetes Ecosystem

Pulumi announced significant enhancements to the Kubernetes ecosystem. Key improvements include a new Kubernetes-native deployment agent for enhanced security and scalability, major updates to the EKS provider supporting Amazon Linux 2023 and Security Groups for pods, Pulumi ESC integration with External Secrets Operator, and the release of Pulumi Kubernetes Operator 2.0 with dedicated workspace pods. 

These updates, alongside improvements to Helm Chart resources, enhanced await logic, and better CustomResource support through crd2pulumi, strengthen Pulumi’s commitment to providing developers with robust, enterprise-grade tools for managing Kubernetes infrastructure.

Managing infrastructure has become progressively more time consuming and complicated, especially with legacy tools that weren’t designed to handle hundreds or thousands of Kubernetes resources spread across multiple clusters. Teams often struggle with large, complex YAML configurations and domain-specific languages (DSLs) that are restrictive and fail to scale effectively.

Pulumi Infrastructure as Code (IaC) offers a modern solution to these challenges. Instead of specialized languages, teams can program both their cloud infrastructure and Kubernetes resources using familiar, general-purpose programming languages, enhanced by generative AI capabilities. For instance, setting up managed Kubernetes services like Amazon EKS can be accomplished with just a single line of code.
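
As an illustration of that claim, a minimal Pulumi program in Python might look like the following. This is a sketch: it assumes the pulumi and pulumi-eks packages are installed and AWS credentials are configured, and it runs inside a Pulumi stack rather than as a standalone script.

```python
import pulumi
import pulumi_eks as eks

# A single line provisions an EKS cluster with sensible defaults
# (VPC configuration, a default node group, and the IAM roles it needs).
cluster = eks.Cluster("my-cluster")

# Export the kubeconfig so the new cluster can be reached with kubectl.
pulumi.export("kubeconfig", cluster.kubeconfig)
```

Running `pulumi up` against this program creates the cluster; the same general-purpose language can then define the Kubernetes resources deployed onto it.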

Red Hat announces OpenShift AI 2.15 to speed up innovation

Red Hat today announced the latest version of Red Hat OpenShift AI, its artificial intelligence (AI) and machine learning (ML) platform built on Red Hat OpenShift that enables enterprises to create and deliver AI-enabled applications at scale across the hybrid cloud. 

Red Hat OpenShift AI 2.15 is designed to provide greater flexibility, tuning and tracking capabilities, helping to accelerate enterprises’ AI/ML innovation and operational consistency with greater regularity and a stronger security posture at scale across public clouds, datacenters and edge environments.

According to IDC, enterprises included in the Forbes Global 2000 will allocate over 40% of their core IT spend on AI initiatives.

The advanced features delivered with the latest version of Red Hat OpenShift AI include:

  • Model registry, currently available as a technology preview, provides a central place to view and manage registered models, with the option to run multiple model registries. Red Hat has also donated the model registry project to the Kubeflow community as a subproject.
  • Data drift detection monitors changes in input data distributions for deployed ML models, which helps keep the model aligned with real-world data and maintain the accuracy of its predictions over time.
  • Bias detection tools help data scientists and AI engineers monitor whether their models are fair and unbiased, a crucial part of establishing model trust. These tools are incorporated from the TrustyAI open source community, which provides a diverse toolkit for responsible AI development and deployment.
  • Efficient fine-tuning with LoRA uses low-rank adapters (LoRA) to enable more efficient fine-tuning of LLMs, such as Llama 3. By optimizing model training and fine-tuning within cloud native environments, this solution enhances both performance and flexibility, making AI deployment more accessible and scalable.
  • Support for NVIDIA NIM, a set of easy-to-use interface microservices that accelerate the delivery of gen AI applications. 
  • Support for AMD GPUs enables access to an AMD ROCm workbench image for using AMD GPUs for model development. 

As a comprehensive AI/ML platform, Red Hat OpenShift AI 2.15 also adds new gen AI capabilities around model serving, including a vLLM serving runtime for KServe.

Observe introduces AI-powered K8s solution to detect issues quickly

Observability platform provider Observe, Inc. today launched Kubernetes Explorer, designed to simplify visualizing and troubleshooting for cloud-native environments. Kubernetes Explorer enables DevOps teams, site reliability engineers (SREs) and software engineers to easily understand disparate Kubernetes components, detect issues quickly, uncover root causes and resolve them faster than ever before.

According to the 2024 Gartner Critical Capabilities for Container Management report, “by 2027, more than 75% of all AI deployments will use container technology as the underlying compute environment, up from less than 50% today.” As Kubernetes adoption continues to grow, driven by AI and edge computing trends, the complexity of observing distributed applications and infrastructure has increased. Observe addresses this challenge by unifying fragmented data across metrics, traces, and logs, providing insights that span applications, the Kubernetes platform, and cloud-native infrastructure.

Observe’s AI Investigator tightly integrates with Kubernetes Explorer to create custom, incident-specific visualizations and suggestions, providing on-call engineers with an expert Kubernetes assistant while troubleshooting. Observe launched its new AI Investigator – based on an agentic AI approach – last month as part of its most significant product update to date, along with $145 million in Series B funding.

Additional Kubernetes Explorer features include:

  • Kubernetes Hindsight: Provides historical visibility so teams can do retrospective analysis and performance optimization in ephemeral container environments.
  • Cluster Optimization: Offers a visual map of workload distribution across the Kubernetes cluster, enabling quick identification of underutilized capacity and optimization of resources. This capability is crucial as the latest CNCF cloud-native FinOps survey found half of organizations overspend on Kubernetes infrastructure, primarily due to over-provisioning.
  • Resource Descriptors: Delivers comprehensive visibility into full YAML configurations of Kubernetes resources, maintaining deployment descriptor history for easy version comparison. 

For more information about Kubernetes Explorer, visit www.observeinc.com.

CAST AI launches Container Live Migration solution and AI Enabler

CAST AI today announced the launch of its commercially supported Container Live Migration feature. This innovation enables uninterrupted migration of stateful and uninterruptible workloads in Kubernetes, such as databases and AI/ML jobs, ensuring continuous uptime and operational efficiency while reducing infrastructure costs.

Organizations running resource-intensive, stateful applications cannot afford downtime. Since there is no widely adopted, commercial solution to move these sensitive workloads to cost-efficient resources, they end up running in underutilized and expensive nodes. Container Live Migration addresses this challenge head-on by enabling these previously unmovable workloads to be automatically packed into fewer optimized nodes. This helps eliminate resource fragmentation, ensuring maximum resource utilization and optimal instance selection while driving substantial cost savings.

To learn more about the Container Live Migration feature, sign up for a free trial or book a live demo session.

Also today, CAST AI announced the launch of AI Enabler, an optimization tool that streamlines the deployment of LLMs and reduces operational expenses. AI Enabler leverages CAST AI’s Kubernetes infrastructure optimization capabilities to intelligently route queries to the most optimal and cost-effective LLMs, whether they’re open-source or commercial. 

“With the increasing availability of LLMs, choosing the right one for your use case, and doing so cost-effectively, has become a real challenge,” said Laurent Gil, co-founder and CPO at CAST AI. “AI Enabler removes that complexity by automatically routing queries to the most efficient models and providing detailed cost insights, helping businesses fully leverage AI at a fraction of the cost. This automated approach allows organizations to scale generative AI solutions across their operations without sacrificing cost efficiency.”

New Container Security Tool Tells DevOps and Platform Engineers if They’re Protected Against Escapes

Edera, the world’s only secure-by-design Kubernetes and AI solution, announced the availability of Am I Isolated, an open source container security benchmark that probes users’ runtime environments and tests for container isolation.

The Rust-based container runtime scanner runs as a container and detects gaps in users’ container runtime isolation. It also provides guidance to improve users’ runtime environments to offer stronger isolation guarantees.

“The threat of container escapes is resulting in millions in lost revenue for enterprises. Companies are either spending unnecessary dollars running separate Kubernetes environments for untrusted containers or they’re using too many expensive and antiquated tools that don’t solve anything,” said Emily Long, co-founder and CEO at Edera. “It’s time to change the way containers are run and secured and that means solving for escapes. Visibility into your level of vulnerability is the first step. We’re excited to bring this tool to our customers and the community at large.”

Containers are just processes on a host, so isolation is critical to workload and multi-tenancy security because it limits the blast radius of container escapes and security incidents. Am I Isolated also probes for ambient privileges and common misconfigurations made by DevOps teams and platform engineers when setting up their containerized applications or container runtime environments. It provides ongoing testing against container escape techniques.

While Kubernetes turned 10 years old earlier this year, running secure multi-tenancy workloads remains an unsolved problem that’s costing companies millions of dollars. Edera introduces a diverse set of technologies with a diverse team of experts to solve what has been the decade’s defining enterprise security challenge.

Edera uses a type 1 hypervisor to offer isolation at the container level for the first time, enabling companies to realize the original promise of Kubernetes and to move quickly to run GPUs for emerging AI workloads. Instead of running containers in Linux namespaces, Edera’s platform treats a container like a virtual machine guest. There is no shared kernel state between containers, and a memory-safe Rust control plane further secures workloads. Edera can be used anywhere users run their containers (public cloud, private cloud and on-premise) and doesn’t require virtualization extensions or custom infrastructure. It’s simple, delivers peace of mind and saves companies millions in cloud costs.

Am I Isolated is free and open source and can be downloaded on Edera’s GitHub.

SUSE simplifies observability with new platform for Rancher users

SUSE today announced the early access of SUSE Cloud Observability, a cloud native, fully managed observability platform designed specifically for Rancher-managed Kubernetes clusters. SUSE Cloud Observability offers an all-in-one observability tool tailored for the Rancher community, eliminating the need for separate tools.

SUSE Cloud Observability delivers multi-cloud visibility with dependency maps to visualize clusters across multiple clouds. Enterprises can use the platform to monitor mission-critical workloads in Rancher-managed Kubernetes clusters across AWS, Azure and Google Cloud, quickly detecting and resolving issues in real time.

Benefits of SUSE Cloud Observability include:

  • Comprehensive Insights: Deep insights into Kubernetes environments powered by OpenTelemetry, and more than 40 out-of-the-box dashboards that offer a holistic view of the entire stack.
  • Rapid Setup and Deployment: Deploy quickly with a SaaS observability solution and out-of-the-box pre-configured policies.
  • Cost-Effective Entry Point: Start small and scale as needed with transparent, usage-based pricing and no hidden costs while still enjoying a fully managed SaaS solution.

Akamai launches cloud-agnostic, ready-to-run application platform

Akamai Technologies, Inc. today announced Akamai Application Platform, a ready-to-run solution that makes it easy to deploy, manage, and scale highly distributed applications.

Akamai Application Platform is built on top of the cloud native Kubernetes technology Otomi, which Akamai acquired from Red Kubes earlier this year. 

Akamai Application Platform provides developers with:

  • Ready to run, customizable templates that provide essential capabilities for running cloud native applications on a Kubernetes cluster
  • A framework that seamlessly integrates pre-configured upstream open source projects while simplifying their deployment, management and scaling for production workloads
  • A self-service environment for engineering teams to build, deploy, secure, and maintain their applications
  • A catalog of golden path templates based on industry best practices and open-source tools across the cloud native ecosystem

Akamai Application Platform also offers a fully integrated suite of tools for observability, security, compliance, secrets management, CI/CD, service mesh, and cloud native storage.