Let’s cut right to the chase: while Kubernetes is great for quickly deploying containers at cloud scale, I think most people who use Kubernetes are missing out on the real value of the cloud, which is offloading responsibility for parts of your application to managed cloud services. Kubernetes users quickly become “containerized” and have a hard time leveraging cloud services that aren’t within the cluster. This limits flexibility, increases cost, and ultimately ties the hands of developers, who should ideally be free to pick the right tools to build the best applications.
I am old enough to have played with bulletin-board systems (BBS) in the 90s, and I have watched the cloud evolve from the early days of S3 and App Engine to the powerful computing platform it is today. I have a few observations on how we got to the point where one of the most popular tools our industry uses to build cloud systems (i.e. Kubernetes) essentially hurts the adoption of the cloud.
First, I think we need to consider where Kubernetes’ DNA comes from. This is a project born out of Google, which has a very different approach to building applications and services than Amazon does. Amazon Web Services (AWS) offers over 150 cloud services for developers to choose from: S3 buckets, EC2 instances, container-based technologies, serverless, analytics, and so on. Watch any AWS re:Invent keynote and it’s clear that, just as in its retail business, Amazon prioritizes giving its users choice.
As an ex-Amazonian myself, I can tell you that each independent team within Amazon has the ability to build and deliver an application all the way to the customer. There is no standardized architecture, and there is no single person who understands how it all works across various teams, services, and applications.
Then comes Google with an entirely different internal architectural philosophy and engineering culture.
The platform everyone uses internally at Google is a container orchestration system called Borg, which Kubernetes is based on. It is a beautiful, low-level platform that takes care of scheduling containers and scaling them based on demand. A positive effect is that you don’t really need to deal with deployment; in exchange, everything that sits on top of Borg (i.e. your applications) relies on the platform. If you only build stuff within Google, “the cloud” is basically all the services that Borg orchestrates. They are all available at your fingertips, and you can create incredible things like Gmail and YouTube with them. It is also a philosophy that runs directly counter to the Amazonian emphasis on choice.
It’s been amazing to see the growth of Kubernetes since it was open sourced almost a decade ago. For Google, it has been a competitive advantage and a way to eat up market share (think: Android), and the Cloud Native Computing Foundation (CNCF) and the Linux Foundation have certainly put a lot of marketing power behind it.
But the fact remains it is too complex.
This is especially true if you are a startup: you need to be able to move fast and pivot. The friction is that in order to really take advantage of Kubernetes, you have to keep everything within the cluster. This dynamic creates a lot of tension, and users often get the worst of both worlds. Everything outside of the cluster becomes a second-class citizen, so you end up with resources like buckets that sit outside the application but that the application relies on. This adds a ton of complexity to manage at scale, and it creates a lock-in effect to the cloud provider, undermining Kubernetes’s alleged portability.
But if you bring everything into the cluster, you can’t take advantage of the service-based economy, because everything within the cluster is now your responsibility. Say you want to use Redis. If you deploy it externally, you don’t get the benefits of the Kubernetes provisioning engine and other shared capabilities. If you run it inside the cluster, you have to manage the Redis database yourself: replication, storage, version updates, backups, and so on. That is work you really shouldn’t be taking on, and much of the value of the managed service is lost.
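To make the trade-off concrete, here is a rough sketch of the two Redis options on Kubernetes. The endpoint, image tag, and sizes below are illustrative assumptions, not recommendations.

```yaml
# Option A: point the cluster at a managed Redis outside it.
# The external hostname is a hypothetical example -- the database lives
# outside the cluster, so Kubernetes cannot provision, upgrade, or back
# it up; it is a second-class citizen to your manifests.
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  type: ExternalName
  externalName: my-cache.example.cache.amazonaws.com  # assumed endpoint
---
# Option B: run Redis yourself inside the cluster. Now replication,
# persistence, version upgrades, and backups are all your problem.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7  # you now own the upgrade cadence
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

Either way something is lost: option A forfeits in-cluster provisioning, option B forfeits the managed service.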
Certainly, there are some situations and applications where it makes sense to build entirely within the Kubernetes ecosystem. But I see too many companies commit to this path and way too early. The result is that they end up with a lot more complexity than is necessary.
Alternatively, other companies commit to a half-in approach in which they use both Kubernetes and cloud services. They end up not utilizing the service-based economy as much as they could because there is pressure to put as much as possible into the cluster. On the other hand, they don’t get the full benefit of using Kubernetes because they are not fully committing to it.
This is why what the community is building with Winglang is so important. Winglang is a new open-source programming language that unifies cloud infrastructure and application code into a single programming model that works across Kubernetes, serverless, AWS, Azure, GCP, and more. In Winglang, containers are first-class citizens, but so are buckets, queues, and all the other cloud resources. Decoupling the application definition from the platform lets developers build applications that are infrastructure agnostic and easily portable.
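As a rough illustration of that model (a minimal sketch in the spirit of Wing’s published examples; exact APIs may differ between versions), a queue, a bucket, and the code that connects them live in one program, with no separate infrastructure definition:

```wing
bring cloud;

// Cloud resources are declared like ordinary objects.
let bucket = new cloud.Bucket();
let queue = new cloud.Queue();

// "inflight" code runs in the cloud at runtime; the compiler wires up
// the infrastructure and permissions for whichever target platform
// (Kubernetes, serverless, AWS, Azure, GCP, ...) you compile for.
queue.setConsumer(inflight (message: str) => {
  bucket.put("latest-message.txt", message);
});
```

The same source compiles to different targets, which is what makes the application definition portable rather than tied to one platform.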
Kubernetes deserves a lot of the attention and hype it’s received since launching in 2015. But much of the pain and frustration comes from deciding to use Kubernetes too early and then getting locked into its ecosystem. It’s my view that developers should build their applications in a way that is independent of where the application will ultimately be deployed.
There will be plenty of times when Kubernetes is the right choice, but there will also be plenty of situations where it isn’t, or at least isn’t from day one. Taking full advantage of the cloud means keeping that autonomy of choice.