One of the key reasons for the increasing use of Kubernetes across organizations is that it radically simplifies and automates the deployment and management of highly complex microservices applications. Some organizations have reported using Kubernetes to manage dozens of clusters, with a correspondingly high number of nodes, applications and containers running in those environments. Often these deployments run in hybrid cloud environments, adding the complexity of deploying and communicating across on-premises infrastructure and multiple hosted clouds.
However, organizations in finance, healthcare, the public sector and other highly regulated industries have additional security and compliance requirements. In these cases, the advantages of highly available, scalable and redundant cloud-based Kubernetes environments have to be balanced against infrastructure restrictions such as no public internet access and other strict security standards.
We call these isolated or “air-gap” Kubernetes deployments, and in them the installation and maintenance of Kubernetes becomes significantly more complex. Isolated deployments require additional planning and implementation work to succeed.
Considerations for isolated deployments
Cloud-native technologies from hosted cloud storage and microservices to Docker and Kubernetes are radically changing the IT landscape. They offer the ability to automate infrastructure and application deployment by leveraging standardized cloud APIs, resources, software repositories and more.
But cloud-native does not mean cloud-bound. Increasingly, companies are seeking to take advantage of these cloud-native features in their own secure data centers. While challenging to implement, it’s not impossible. Building an isolated Kubernetes environment starts with infrastructure and dependency planning.
Running Kubernetes in isolation means having repositories in place for Kubernetes, Docker and all of the open-source packages you’ll need to build and deploy containers. Your environment will need its own private Docker registry, a Linux package mirror, a Helm chart repository for your Kubernetes manifests, and a binary repository.
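To make this concrete, here’s a minimal sketch of a Deployment that pulls its image from an internal registry rather than a public one. The registry hostname and secret name are hypothetical placeholders for whatever your environment uses.

```yaml
# Minimal sketch: a workload whose image comes from the internal,
# air-gapped registry. registry.internal.example and the pull secret
# name are placeholders, not real infrastructure.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
        - name: payments-api
          # Fully qualified image reference pointing at the private registry
          image: registry.internal.example/payments/payments-api:1.4.2
      # Credentials for the private registry, stored as a Kubernetes secret
      imagePullSecrets:
        - name: internal-registry-creds
```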
Once your cluster has been created, most deployments need additional software such as a service mesh like Istio, Grafana dashboards and CI/CD tools like Jenkins. It’s an additional challenge to configure all of your workloads to pull images from the internal registry and repositories, which are often built on artifact storage platforms such as Nexus, Artifactory or Harbor.
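As an illustration of that image-redirection work, many public Helm charts expose the registry and repository as values you can override. The key names below follow a common convention but vary from chart to chart, and the Harbor hostname is an assumption:

```yaml
# Hypothetical values.yaml override redirecting a chart's images to an
# internal Harbor instance. Check each chart's own values.yaml for the
# actual key names it supports.
image:
  registry: harbor.internal.example
  repository: mirrored/nginx
  tag: "1.25.3"
  pullPolicy: IfNotPresent
imagePullSecrets:
  - name: internal-registry-creds
```

This assumes the chart itself has already been mirrored into your internal chart repository, since the cluster has no route to public chart hosts.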
As you’re setting up local mirrors of software, build dependency packages and other resources, you’ll need to scan every introduced open-source component for vulnerabilities, use DMZ bastion hosts for external connections, and more. Going forward, you’ll need procedures in place to track new releases, updates and patches, evaluate their impact on the environment, bring packages into the system securely, test, deploy and so on.
Are managed services a viable air-gap solution?
Given the overhead required to set up and manage an isolated Kubernetes environment, are there managed Kubernetes offerings that can meet these security requirements or regulations?
The short answer is: probably not. Cloud providers won’t give you access to master nodes and their configuration or logs. While some cloud services provide cluster-level isolation, you’re still deploying to someone else’s hardware in a physical and software environment you don’t control. In many cases, communication with your hosted environment would still traverse the public internet, adding another attack surface to manage.
On top of that, a cloud provider gives you limited ability to customize the configuration of Kubernetes and other components to suit your needs. Managed cloud providers typically offer a predetermined set of services and software distributions. You’re dependent on their patch and upgrade schedule, and your environment is limited to the tools and integrations they offer.
Maybe your deployment requirements are met by a cloud provider’s Kubernetes offering, but you still need the security of an air-gap deployment. Unfortunately, most providers won’t let you install their distribution on your own local network and bare-metal servers, so you can’t take advantage of the provider’s pre-packaged Kubernetes environment for on-premises deployments.
The knowledge you’ll need for an on-premise deployment
A viable option for air-gap Kubernetes deployments is to develop a home-grown solution. This approach will require specific planning and expertise. It’s vital that you evaluate all the pros and cons of investing in your own custom, isolated Kubernetes solution. For certain businesses, it may be a requirement, so you should plan, budget and execute carefully to make sure you get the most out of your investment.
You can expect three to four minor Kubernetes releases per year. As we mentioned earlier, you’ll have to keep up with these upstream Kubernetes releases and keep your isolated clusters up to date. In addition, the Kubernetes developers occasionally introduce breaking changes with these releases. All of your other software dependencies will likewise have releases, patches and upgrades that you’ll have to stay on top of.
Part of your ongoing planning and management will include analysis, testing and either migrating to a newly created cluster or upgrading all Kubernetes components in place on a live cluster. Be honest about the potential risks of data corruption in the data store, possible downtime and the development work that upgrades may require.
When your Kubernetes cluster is deployed on-premises, it is much more difficult to install and maintain all of its components and dependencies. For one, if you have bare-metal servers with no automation, auto-scaling your cluster properly will be difficult.
If your infrastructure is based on VMware vSphere, you’ll need to consider self-service features such as whether users can provision new virtual hosts themselves, how they’ll configure networking and storage and so on.
High availability is a key feature of successful Kubernetes deployments. If you’re managing your own hardware or VMs, how will you implement availability and scalability of clusters and their workloads? This isn’t just related to configuration of your software, infrastructure and applications. It also applies to issues such as rack awareness for VMs. You’ll even have to address concerns such as the need for redundant power supplies and datacenter cooling.
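For the control plane piece of that puzzle, one common starting point is a kubeadm configuration that places the API servers behind a load balancer so any single control-plane node can fail without taking the cluster down. This is a sketch only; the Kubernetes version, load balancer address and internal image repository below are assumptions for an air-gapped setup.

```yaml
# Sketch of a kubeadm ClusterConfiguration for an HA control plane.
# lb.internal.example fronts multiple control-plane nodes; the image
# repository points at the internal mirror instead of registry.k8s.io.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.27.6
# Stable virtual endpoint that all nodes and clients use to reach the API
controlPlaneEndpoint: "lb.internal.example:6443"
# Pull control-plane images from the air-gapped registry mirror
imageRepository: registry.internal.example/k8s
etcd:
  local:
    dataDir: /var/lib/etcd
```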
You’ll need strategies in place for storage management and backup and disaster recovery (aside from the aforementioned emergency power). Remember, your isolated Kubernetes environment can’t rely on cloud resources for redundant storage or backup services. Your storage management platform for application data and backups will have to be implemented and managed in-house.
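One possible shape for those in-house backups, assuming you adopt an open-source tool such as Velero backed by an on-premises S3-compatible object store like MinIO (both are illustrative choices, not requirements), is a scheduled nightly backup of your application namespaces:

```yaml
# Hypothetical Velero Schedule: back up the payments namespace nightly
# at 02:00 to an internal object store, keeping backups for 30 days.
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: nightly-payments
  namespace: velero
spec:
  schedule: "0 2 * * *"          # cron expression, cluster-local time
  template:
    includedNamespaces:
      - payments
    # Refers to a BackupStorageLocation pointing at the on-prem store
    storageLocation: internal-minio
    ttl: 720h0m0s                # 30-day retention
```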
We’ve already mentioned this, but from an operations perspective you must keep track of and implement the latest security fixes for upstream Linux distribution repositories, and make sure that patched or upgraded Docker and Linux kernel versions will be compatible with your Kubernetes version. You’ll need strategies in place for Kubernetes version upgrades along with all other external dependencies in your system.
This sounds like a lot to keep track of, but none of it is insurmountable. It does require a lot of resources—time, money, and deep Kubernetes in-house expertise. Understand your system requirements, plan ahead and execute carefully. Once the pieces are in place, tested and configured correctly, your isolated Kubernetes deployment can run just as effectively as in a public cloud environment.
Security in isolated environments
From the start, a key reason for isolated or air-gap Kubernetes deployments is application and data security. This means both external and internal security concerns have to be addressed at all levels of your planning and implementation.
An isolated physical environment, separate from the public internet, is only one aspect of security. You’ll have to implement protected communication between all clusters and application components in the system. All data should be encrypted in transit and at rest, whether it lives in databases or in warehouse storage. All users and applications should authenticate and be authorized through enterprise-wide identity management.
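Kubernetes can cover part of the at-rest requirement natively. As a hedged sketch, the EncryptionConfiguration below tells the API server (via its --encryption-provider-config flag) to encrypt Secret objects before they are written to etcd; the key shown is a placeholder you must generate yourself.

```yaml
# Sketch: encrypt Kubernetes Secrets at rest in etcd with AES-CBC.
# Generate a real key with: head -c 32 /dev/urandom | base64
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # New writes are encrypted with this key
      - aescbc:
          keys:
            - name: key1
              secret: "<base64-encoded 32-byte key>"
      # identity allows reading objects written before encryption was enabled
      - identity: {}
```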
A custom in-house Kubernetes implementation should integrate single sign-on (SSO) with LDAP and Active Directory or another authentication service. Use role-based access control (RBAC), pod security policies and network policies in Kubernetes. On the hosts, employ Security-Enhanced Linux (SELinux) for access control security policies and ipsets and iptables for network whitelisting or blacklisting, and use admission controllers and admission webhooks to secure the Kubernetes API itself.
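To ground two of those controls, here is a sketch of a default-deny NetworkPolicy alongside a minimal read-only Role bound to a directory-sourced group; the namespace and group names are hypothetical.

```yaml
# Default-deny: with no ingress or egress rules listed, all traffic to
# and from pods in the namespace is blocked until explicitly allowed.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: payments
spec:
  podSelector: {}          # selects every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
---
# Read-only access to pods and their logs for one team
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: payments
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
---
# Bind the role to a group asserted by your SSO/LDAP integration
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: payments
subjects:
  - kind: Group
    name: "payments-developers"   # hypothetical directory group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```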
Your security considerations will also include backup and recovery procedures, auditing, logging configuration and optimization, Prometheus federation setup and so on. There are many aspects of the system to secure.
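As one example from that list, Prometheus federation typically means a central server scraping pre-aggregated series from per-cluster Prometheus instances through the /federate endpoint. The targets and matchers below are illustrative:

```yaml
# Sketch of a central Prometheus scrape job federating from two
# per-cluster Prometheus servers; hostnames are placeholders.
scrape_configs:
  - job_name: "federate"
    scrape_interval: 30s
    honor_labels: true           # keep the labels set by the source servers
    metrics_path: "/federate"
    params:
      "match[]":
        - '{job="kubernetes-apiservers"}'
        - '{__name__=~"job:.*"}' # pre-aggregated recording rules
    static_configs:
      - targets:
          - "prometheus.cluster-a.internal:9090"
          - "prometheus.cluster-b.internal:9090"
```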
If you don’t have the in-house experience to manage a project like this, consider contracting with a Kubernetes partner to take advantage of their expertise: they can guide your infrastructure planning in the right direction, make sure your solutions are optimal and stable, and develop your team’s knowledge and skills.
While it can be challenging to implement Kubernetes in highly restrictive and isolated environments, it isn’t impossible. Considering the requirements and solution options carefully can help you start on the right path from the beginning and avoid common pitfalls.
The isolated nature of an air-gap deployment provides some measure of control, but don’t let that give you a false sense of security. Your SecOps team faces a substantial manual effort to configure all of the Linux and Kubernetes security features, but it is worth the investment.