It’s been clear for the past several years that containerisation is the new norm for enterprise IT infrastructure. A recent survey Red Hat conducted with CCS Insight found that 91% of EMEA developers believe moving apps and workloads to containers is a high priority. However, while managing the lifecycle of one container is easy, managing thousands or even millions of containers requires teams to deploy a container orchestration system. By far the most common solution for orchestration is Kubernetes, used by up to 83% of organisations that run containers.
However, when talking about Kubernetes at a high level, we often oversimplify it as a system that enables containers, but this isn’t the full picture. Rather, Kubernetes is a solution that addresses some central challenges of resource management and orchestration between containers.
While this makes Kubernetes a necessary tool for running containers in your organisation, it’s not sufficient to operate a container environment alone. Alongside Kubernetes, a container platform requires a container registry, container engine, storage capabilities, networking infrastructure, runtimes, logging and monitoring, and many other pieces of infrastructure. This all needs to sit atop an underlying operating system, alongside other infrastructure and processes – essential if a team wants to achieve continuous integration/continuous delivery (CI/CD) to enable, accelerate, and enforce DevOps practices.
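The gap is easy to see in even a minimal Kubernetes manifest. The Deployment below (a hypothetical sketch – the names, image, and registry are illustrative, not from any real project) only declares desired state; everything it implicitly relies on must be supplied by the rest of the platform.

```yaml
# Illustrative only: application name, image, and registry are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: web
        # Assumes a container registry the cluster can pull from
        image: registry.example.com/example-app:1.0.0
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: data
          mountPath: /var/data
      volumes:
      - name: data
        # Assumes the platform provides a storage backend and provisioner
        persistentVolumeClaim:
          claimName: example-app-data
```

Kubernetes will happily accept this manifest, but nothing in it actually runs until a container engine, a network plugin, a reachable registry, a storage provisioner for the claim, and some form of logging and monitoring are all in place – precisely the pieces that sit outside Kubernetes itself.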
So, when it comes to “deploying Kubernetes”, what we usually mean is deploying a solution that includes all of the above – that is, you’re committing to much more than just running Kubernetes alone. There are two ways to go about it: either pick a free or commercial distribution that has handled this legwork, or build it all up from scratch and do it yourself. Generally, I find the latter to be inadvisable.
Going distro is more efficient
When you pick an established and recognised Kubernetes “distribution”, you’re effectively picking a solution that comes pre-packaged with some or all of the pieces needed to orchestrate and manage containers at scale. In a Kubernetes distribution (“distro”), somebody else has taken the time to develop, test, and integrate all the necessary components of your container infrastructure.
This frees up your teams to focus on the value-add of their job: developing and delivering applications. With the confidence that the platform will work from the outset, and that your team can stay focused on their day-to-day roles, distros are a very enticing option for most businesses. By partnering with the right provider, you can unlock the power of containers and Kubernetes without needing to deal with all their complexity.
In addition, “going distro” gives you access to numerous sources of support. For example, a community of users for an established distro can support your own team by providing a pool of knowledge for troubleshooting and planning purposes. Distros that have paid support can go a step further, and connect you directly with the creators to help you unlock the power of your Kubernetes environment.
So, a distro offers a quicker and more reliable path to deploying your container platform. There’s one perceived downside that pushes some people away from distros and towards a DIY solution – the spectre of vendor lock-in.
The myth of Kubernetes distro “lock-in”
The most common reason for a team eschewing a Kubernetes distro and instead going DIY is the fear of lock-in. Given how rapidly the enterprise IT landscape is changing, many teams understandably feel pressed to keep their container environment as flexible as possible. The rationale is that a DIY container platform leaves them dependent on no single vendor’s architecture, so they can quickly pivot when needed.
This can be short-sighted, though. In truth, there’s no escaping lock-in to some degree: the question is who you’re locked in with. A DIY Kubernetes environment simply trades lock-in with a vendor for lock-in with the team who developed that DIY solution, especially if the solution features unusual code or workarounds, or isn’t extensively documented.
If you have a smaller team and you lose the project lead on your DIY container platform, you’re in a lot of trouble – you were effectively locked in with that person. Where a vendor can guarantee continuity of support for a Kubernetes distro, a DIY solution offers no such guarantee.
Distros provide a clear pipeline of updates and improvements, whereas if you go DIY it’ll be on your team to continually work to keep up with the industry. Ultimately, what matters is delivering apps to customers, and if bespoke infrastructure impedes that goal, then that’s a problem.
DIY is best for niche business cases
Building a DIY container platform is often likened to reinventing the wheel in-house. But it’s probably more analogous to reinventing the automobile, given that a DIY solution requires an extremely skilled team with a high degree of specialist knowledge across a variety of domains, including storage, networking, security, and monitoring.
Instead of your team focusing on deliverables that generate value, i.e. delivering on business-oriented metrics, implementing DIY Kubernetes forces them to turn their attention to something that has no first-order impact on the ability to achieve business goals. The investment of time and capital to replicate the work already done by a distro is extensive, and as mentioned above, comes with risks of its own.
Of course, there are some instances where teams gain a competitive advantage from a DIY environment, though as more distros are released and mature, these cases grow scarcer. Your team should be able to identify such cases easily, since they likely involve very niche or unorthodox workloads or architecture in the first place. There is now a range of distros covering the vast majority of use cases for Kubernetes and containers, including those at the more “niche” end of the spectrum: if what you’re doing and running is so exceptional that no distro covers it, your team should be able to tell you up-front.
Given that a distro can guarantee quicker deployment of your container platform, has a higher bar in terms of security, and provides a clear pipeline of support, it’s very hard to justify a DIY solution in today’s market. Rather than having your team spend time recreating already-available container infrastructure, it’s almost always better to have your team focus their time on building the applications that your organisation needs to deliver on your business goals.