How and why containers should be the way forward for enterprises

Nowhere in the world of open source technology has the market grown as much as with containers. 451 Research predicts growth of more than 250% in this market between 2016 and 2020, and it’s not hard to see why.

Container technology combines the security of traditional virtual machines with new-found speed and density, without requiring an enormous operating system for each instance.

It is important to remember, though, that this technology is still very much in its infancy. Plenty of customers are still asking questions along the lines of, “does any of this container stuff actually work as secure technology that can be used in production in an enterprise environment?” Just like OpenStack before it, this new and exciting technology faces plenty of question marks on its way to market maturity and widespread revenue generation.


Virtual machines to containers: the journey

First, some background. In recent years, virtual machines have given many companies a way to expand workloads and reduce costs, but they have their limits.

For example, virtual machines have far less capacity than containers in terms of the number of applications that can be packed onto a single physical server.

Virtual machines also consume significant system resources; each virtual machine runs a full copy of an operating system, as well as a virtual copy of all the hardware that the operating system needs in order to function.

Containers offer a new form of virtualisation, providing almost the same level of resource isolation as a traditional hypervisor, but with lower overhead: a smaller system footprint and greater efficiency. This means higher density can be achieved – simply put, you can get more out of the same hardware.
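To make the density point concrete, here is a minimal sketch on a host with LXD installed; the image alias and container names are assumptions for illustration.

    lxc launch ubuntu:22.04 app1   # a full Ubuntu userspace, but sharing the host kernel
    lxc launch ubuntu:22.04 app2   # each container starts in seconds, with no guest OS to boot
    lxc list                       # dozens of such containers can run on hardware that would host only a handful of VMs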

The next stage: enterprise adoption

The telco industry has been at the bleeding edge of adopting container technology. Part of the catalyst for this trend has been the NFV (network function virtualisation) revolution – the concept of telcos shifting what were traditionally welded-shut proprietary hardware appliances into virtual machines.

We certainly do see virtual machines being used in production at some telcos, but containers are actually a stronger fit in some cases; for NFV applications in particular, the performance is even better.


Developers in enterprise environments are aware that containers offer both higher performance for the end user and greater operational efficiency for the cloud administrator.

However, many CIOs are still unsure that containers are the best technology choice for them, due to wider market misconceptions. For example, some believe that by using one particular type of container they will tie themselves to a specific vendor.

Security concerns

Another common misconception that can present an obstacle to enterprise adoption concerns security. However, there are several controls in place that enable us to say, with confidence, that an LXD container is more than secure enough to satisfy a CIO who is, understandably, more security-conscious than ever.

One of these is resource control, which inside the Linux kernel is provided by a technology called cgroups (control groups), originally engineered at Google in 2006. Cgroups is the fundamental kernel technology that groups processes together and governs the resources they may consume. Combined with kernel namespaces, this is essentially what a Docker or LXD container is – an illusion that the Linux kernel creates around a group of processes to make them look like they belong together.

Within LXD and Docker, cgroups allow you to assign limits to parameters such as CPU, disk storage and throughput, so you can keep one container from taking all of the resources away from other containers. From a security perspective, this is what ensures that a given container cannot mount a denial of service (DoS) attack against the containers alongside it, thereby providing quality of service guarantees.
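As a rough sketch of what this looks like in practice, the following LXD commands cap the CPU and memory available to a container; the container name “demo”, the image alias and the limit values are all illustrative.

    lxc launch ubuntu:22.04 demo
    lxc config set demo limits.cpu 2          # the kernel's cgroups enforce a two-CPU ceiling
    lxc config set demo limits.memory 512MiB  # the container cannot starve its neighbours of memory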


Mandatory access control (MAC) also ensures that neither the container itself, nor the code run within it, has a greater degree of access than the process actually requires, so the privileges available to a rogue or compromised process are minimised.
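For example, assuming a Docker host, with “alpine” and “id” standing in for a real image and command, a container can be launched with every capability dropped and Docker's default AppArmor (MAC) profile applied explicitly:

    # Drop all Linux capabilities, forbid privilege escalation via setuid
    # binaries, and name the default AppArmor profile explicitly.
    docker run --rm --cap-drop ALL --security-opt no-new-privileges --security-opt apparmor=docker-default alpine id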

In essence, the greatest security strength of containers is isolation. Container technology can offer kernel-enforced guarantees that containers cannot access one another. There may be situations where a virtual machine is required for particularly sensitive data, but for the most part containers deliver the security enterprises need. In fact, Canonical designed LXD from day one with security in mind.
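One way to see this isolation at work (a sketch assuming the unprivileged LXD container “demo” from earlier) is to compare who root is inside and outside the container:

    lxc config get demo security.privileged   # empty/false: LXD containers are unprivileged by default
    lxc exec demo -- id -u                    # reports 0 (root) inside the container's user namespace...
    grep root /etc/subuid                     # ...but that root is remapped to a high, harmless uid on the host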

IoT is setting the trend

Many of the notable trends dominating the tech news agenda over the last couple of years, particularly the Internet of Things, are pushing the shift towards enterprise adoption of containers. Container technology is arguably the ideal response to the scalability and data-related issues presented by the predominance of IoT applications.


Containers, in tandem with edge computing, are optimised for enabling the transmission of data between connected devices and the cloud. Harvesting data from any number of remote devices and processing it calls for extreme scaling. Application containers, with the help of tools such as Docker and Ubuntu Core, which runs app packages for IoT known as “snaps”, can help provide this.
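As a brief sketch of the model, with “my-sensor-agent” as a hypothetical snap name used purely for illustration, managing an application on an Ubuntu Core device looks like this:

    snap install my-sensor-agent   # install the application as a confined, transactional snap
    snap list                      # see exactly which snaps and revisions the device is running
    snap refresh                   # pull updates atomically; a failed update rolls back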

Why should you choose containers?

There is no doubt that virtualisation technology will never be the same again following the container revolution. Not only does this software improve performance and data centre efficiency (without the need for additional investment in infrastructure), but organisations that implement it will also see considerable improvements in the speed, efficiency and agility of their IT environments.

Containers offer a faster, more cost-effective and, above all, more efficient way of creating an infrastructure for Linux-on-Linux workloads. The code base is new, written with the benefit of modern advances in development discipline and technology, and that will be an advantage for most companies.


While many still view containers as solely a solution for small and medium-sized organisations, established enterprises in all industries and of all sizes can use this technology to channel the disruptive spirit that will allow them to keep up with more agile and scalable new kids on the block.

 

Sourced by Marco Ceppi, Ubuntu Product & Strategy Team, Canonical
