Kubernetes, Containerisation and tech history repeating itself

Technology can seem to run in cycles.

Over the past couple of decades of IT, most of the focus has been on taking disparate elements of an organisation’s infrastructure and bringing them together into something much simpler. But now, with the greater focus on applications and containerisation, it can feel like we’re breaking them all up again.

However, it’s important to focus on the common thread that connects each big technology and infrastructure trend: these changes have improved cross-functionality, communication, and collaboration across the business. So even if it feels as though the latest trend is undoing something that’s already been done, in reality we’re moving forward and improving on what was there before.

A long history

Containers themselves aren’t actually that new, of course – they’ve been around in some shape or form since the 1970s. Back in 1979, during the development of Unix V7, the chroot system call was introduced – effectively the first instance of the process isolation that lies behind containers. Fast-forward to 2004 and the release of the first public beta of Solaris Containers was another big moment: operating-system-level virtualisation that isolated applications from one another without giving each its own virtual machine, at a time when VMware was popularising hardware virtualisation.
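
For a flavour of how bare-bones that early isolation was, here’s a minimal Python sketch of the chroot idea – Python’s os.chroot is a thin wrapper around that same system call. The jail directory is hypothetical, and the script needs root privileges on a Unix-like system:

    import os

    # Hypothetical directory, prepared in advance with the binaries,
    # libraries and data the process is allowed to see.
    JAIL = "/srv/jail"

    # os.chroot wraps the chroot system call introduced with Unix V7: it
    # changes this process's idea of where the filesystem root is. Both
    # calls below require root privileges.
    os.chroot(JAIL)
    os.chdir("/")  # make sure we are inside the new root

    # From here on, the process cannot see anything outside /srv/jail:
    # the filesystem half of the isolation that containers later built on.
    print(os.listdir("/"))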

Over the next decade things became more accessible as the open-source community got hold of containers and began to apply standards. By the early 2010s, containerisation was ready to go mainstream and the launch of Docker in 2013 did the trick, helping the technology explode in popularity.

The success of containers at that time can be partly explained by VMware’s rise over the previous years. Virtualisation meant that organisations no longer had to run on a one-application-per-server basis, paying for inefficient data centres with thousands of servers each running a single application. VMware allowed companies to virtualise those servers and run multiple applications on one piece of hardware – a true game-changer.

Bringing everything together like this, though, created a fresh problem: each virtual machine still carries a full guest operating system, so a lot of a server’s capacity goes on duplicated overhead rather than on the applications themselves. That’s where containers came in – a container shares the host’s kernel and packages only what the application needs, and, of course, you can run many containers on one server.
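
To make that concrete, here is a minimal sketch using the Docker SDK for Python (pip install docker). It assumes a local Docker daemon is running; the container names, image and resource limits are purely illustrative:

    import docker

    client = docker.from_env()

    # Run several independent containers side by side on one host, each
    # capped so no single application can starve the others.
    for name in ("catalogue", "payments", "search"):
        client.containers.run(
            "nginx:alpine",         # a small image: the container ships
                                    # only what the application needs
            name=name,
            detach=True,
            mem_limit="128m",       # hard memory cap for this container
            nano_cpus=250_000_000,  # a quarter of one CPU core
        )

    for c in client.containers.list():
        print(c.name, c.status)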

Divide and conquer

Containers have proved vital for some of the essential tech functions modern businesses rely on. Let’s say your company has a shopping application where customers can browse for clothes, make payments and manage their accounts. In the old days, the whole application would be one monolithic module on your system, meaning that if you wanted to update a single function – such as payments, the catalogue or search – you’d have to take the whole thing down. But if you use containers to break the application down into a module for each function, each one can be scaled, updated or tweaked in isolation.

But with, say, five containers for one application, scaling can become a little tricky. Thankfully, Kubernetes solves this by managing containerised workloads and services – it’s essentially the orchestrator that decides where each container runs, how many copies of it exist and what happens when one fails.
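
As a rough illustration of what that looks like in practice, here is a short sketch using the official Kubernetes Python client (pip install kubernetes). The ‘payments’ deployment and the default namespace are hypothetical, and it assumes a kubeconfig already points at a cluster:

    from kubernetes import client, config

    # Load credentials from the local kubeconfig file.
    config.load_kube_config()
    apps = client.AppsV1Api()

    # Scale just the payments module to five replicas; the catalogue,
    # search and account containers are left untouched.
    apps.patch_namespaced_deployment_scale(
        name="payments",
        namespace="default",
        body={"spec": {"replicas": 5}},
    )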

Automated assistance

Kubernetes is also a powerful enabler of DevOps and agile ways of working. Containers that run applications need to be managed to ensure there’s no downtime – if one goes down, another container needs to step up to take its place. This is far easier when handled by an automated system, and Kubernetes provides a framework that takes care of failover and scaling for an application, as well as functions like deployment.
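
The principle behind that automation is a control loop: continuously compare the state you declared with the state you actually have, and close the gap. The toy Python sketch below is not Kubernetes code, just an illustration of why a failed container gets replaced without anyone being paged:

    # Desired state versus actual state, keyed by service name.
    desired = {"payments": 3, "catalogue": 2}  # replicas we declared
    running = {"payments": 3, "catalogue": 2}  # replicas actually up

    def reconcile():
        # Close the gap between what we want and what we have.
        for service, want in desired.items():
            have = running.get(service, 0)
            if have < want:
                print(f"{service}: {have}/{want} up, starting {want - have} more")
                running[service] = want

    # Simulate one payments container dying, then let the loop repair it.
    running["payments"] -= 1
    reconcile()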

However, as we’ve seen with other technologies, Kubernetes isn’t the answer to everything. A 20-year-old CRM, for example, isn’t something that can simply be containerised overnight – it would cost a fortune. Businesses need to speak to outside voices that can help them understand what they can and can’t do.

When we went from physical to virtual, many companies virtualised everything and then realised they had to keep some elements physical. The same thing happened with cloud and it’s beginning to happen with containers, too.

We don’t have to see history repeat itself again. It’s about picking the right technology for the right situation and business need, and organisations that stay focused on improving cross-functionality, communication, and collaboration will be less likely to find themselves unpicking expensive mistakes down the line.

Written – as part of a paid content campaign with CDW – by Ashminder Ubhi, Category Lead for Core Data Centre, CDW
