Everything you need to know about containers

To cope with an increasingly networked and interconnected world, industrial automation is evolving to incorporate many of the new or repurposed technologies underpinning general-purpose computing.

For example, the Internet of Things (IoT) represents a number of technology trends coming together to make solutions more practical – low-power and inexpensive processors for pervasive sensors, wireless networks, and the ability to store and analyse large amounts of data, both at the edge and in centralised data centres.

The IoT opens vast possibilities for information gathering and automation. This in turn gives rise to new opportunities to innovate, increase revenues, gain efficiencies and extend the scope of machine and human impact.

One of the key technologies being used to enable the easy deployment and isolation of applications, on both gateway devices and back-end servers, is Linux containers.


Containers provide lightweight and efficient application isolation, and package applications together with any components they require to run. This avoids conflicts between applications that would otherwise rely on shared components of the underlying host operating system.

According to a Forrester report commissioned by Red Hat, the benefits of containers are broad in scope, with higher-quality releases (31%), better application scalability (29%) and easier management (28%) cited as among the top three reasons to adopt containers.

“That the top benefits cited are so spread out is a testament to the broad appeal of containers to businesses with various objectives,” Forrester noted.

By packaging applications together with the components they depend on to run, containers provide a consistent environment – making applications more portable and eliminating difficult-to-fix conflicts.

Containers are part and parcel of the set of technologies and practices through which new applications are being developed, in industrial automation and elsewhere. The lightweight isolation provided by containers allows them to be used to package up loosely coupled services that may perform only a single, simple function, such as reading a sensor, aggregating some data or sending a message. These small services, each able to operate independently of the others, are often called ‘microservices’.
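
To make that concrete, here is a minimal sketch of the kind of single-purpose service described above – a small HTTP endpoint that returns a sensor reading and could be packaged into its own container. The sensor is simulated, and the names and port are purely illustrative.

```python
# Minimal single-purpose microservice: expose one sensor reading over HTTP.
# The sensor itself is simulated; names and port are illustrative only.
import json
import random
from http.server import BaseHTTPRequestHandler, HTTPServer


def read_sensor() -> float:
    """Stand-in for a real sensor read (e.g. a temperature probe)."""
    return 20.0 + random.random() * 5.0


class ReadingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/reading":
            self.send_error(404)
            return
        body = json.dumps({"temperature_c": read_sensor()}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # One small, independently deployable function: serve sensor readings.
    HTTPServer(("0.0.0.0", 8080), ReadingHandler).serve_forever()
```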

Microservices avoid many of the pitfalls of more monolithic and complex applications in that the interfaces between the different functions are cleaner and services can be changed independently of each other.

Services are, in effect, black boxes from the perspective of other services. So long as their public interfaces don’t change and they perform the requested task, they can be changed in any way the developer sees fit. Other services don’t know – and should not know – anything about the inner workings of the service.
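
Building on the sketch above, a consumer of that service would depend only on its public interface – the endpoint and the shape of its JSON response – so the service’s internals can change without breaking it. The service address here is, again, hypothetical.

```python
# Hypothetical consumer: it knows only the public interface of the
# sensor service (URL and JSON fields), nothing about its internals.
import json
import urllib.request

SENSOR_URL = "http://sensor-service:8080/reading"  # illustrative address


def current_temperature() -> float:
    with urllib.request.urlopen(SENSOR_URL, timeout=5) as resp:
        payload = json.load(resp)
    return payload["temperature_c"]


if __name__ == "__main__":
    print(f"Current temperature: {current_temperature():.1f} C")
```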

These clean interactions in turn make it easier for small teams to work on individual services, test them, and do rapid and iterative releases. This makes it easier to implement DevOps, which is an approach to culture, process and tools for delivering increased business value and responsiveness through rapid, iterative, and high-quality service delivery.

Thus containers, microservices and DevOps – while, in principle, independent things – mutually support and enable each other to create a more flexible and efficient infrastructure, applications that make the best use of that infrastructure, and a process and culture that develops and deploys those applications quickly and with high quality.

For example, the Forrester study also found that containers provide an easier path to implementing DevOps, especially in concert with additional tools. Forrester researchers wrote: “Organisations with configuration and cluster management tools have a leg up on breaking down silos within the software development life cycle.”

Almost three times as many organisations using such tools (42% vs. 15%) identified themselves as being aligned with DevOps, compared with organisations using containers alone.

From a technical perspective, services running in Linux containers are isolated within a single copy of the operating system running on a physical server (or, potentially, within a virtual machine).

This approach stands in contrast to hypervisor-based virtualisation in which each isolated service is bound to a complete copy of a guest operating system, such as Linux. The practical result is that containers consume very few system resources such as memory and impose essentially no performance overhead on the application.

One of the implications of using containers is that the operating system copies running in a given environment are essentially acting as a sort of common shared substrate for all the applications running above. The operating system kernel is shared among the containers running on a system, while the application dependencies are packaged into the container.
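
One rough way to see this shared-kernel arrangement for yourself – exact paths and output vary by distribution and container runtime – is that a process inside a container reports the same kernel release as the host, while its control-group membership is scoped to the container:

```python
# Illustration of the shared kernel: run this on the host and inside a
# container on the same machine. The kernel release matches; only the
# control-group membership (used for resource isolation) differs.
import platform
from pathlib import Path

print("Kernel release:", platform.release())   # same inside and outside

cgroup = Path("/proc/self/cgroup")
if cgroup.exists():                             # Linux only
    print("Cgroup membership:")
    print(cgroup.read_text().strip())           # container-scoped inside
```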

The operating system is therefore not being configured, tuned, integrated and ultimately married to a single application as was the historic norm, but it's no less important for that change.

In fact, because the operating system provides the framework and support for all the containers sitting above it, it plays an even greater role than in hardware server virtualisation, where the host was a hypervisor.

All the security hardening, performance tuning, reliability engineering and certifications that apply to the virtualised world still apply in the containerised one. And, in fact, the operating system shoulders a greater responsibility for providing security and resource isolation than in the case where a hypervisor is handling some of those tasks.

This means, for example, using available Linux operating system capabilities, such as SELinux, and following best practices for running containerised services just as if they were running on a conventional bare-metal host: dropping privileges as quickly as possible and running services as non-root whenever possible.
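
As a hedged sketch of that privilege-dropping pattern – the user and group names are illustrative, and a real service or its container runtime might handle this differently – the general shape on Linux looks something like this:

```python
# Sketch: start with just enough privilege to bind a low port, then drop
# to an unprivileged user before doing any real work. Names are illustrative.
import os
import pwd
import grp
import socket


def drop_privileges(user: str = "svcuser", group: str = "svcgroup") -> None:
    if os.getuid() != 0:
        return  # already unprivileged
    uid = pwd.getpwnam(user).pw_uid
    gid = grp.getgrnam(group).gr_gid
    os.setgroups([])   # clear supplementary groups
    os.setgid(gid)     # drop group first, then user
    os.setuid(uid)


if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("0.0.0.0", 80))   # privileged action done early...
    sock.listen()
    drop_privileges()            # ...then shed root as quickly as possible
    # serve requests here as the unprivileged user
```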

Organisations are also moving toward a future in which the operating system explicitly deals with multi-host applications, serving as an orchestrator and scheduler for them. This includes modelling the application across multiple hosts and containers and providing the services and interfaces to place those applications onto the appropriate resources.
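
To make the placement idea concrete, here is a deliberately simplified, hypothetical sketch of the scheduling problem an orchestrator solves – assigning containers to hosts based on available resources. Real orchestration platforms do far more (health checks, affinity rules, rescheduling), and the host and container names below are invented for illustration.

```python
# Toy scheduler: place each container on the host with the most free memory.
# Purely illustrative of the placement problem an orchestrator solves.
from dataclasses import dataclass, field


@dataclass
class Host:
    name: str
    free_mb: int
    placed: list = field(default_factory=list)


@dataclass
class Container:
    name: str
    mem_mb: int


def schedule(containers, hosts):
    # Place the largest containers first, each on the host with most headroom.
    for c in sorted(containers, key=lambda c: c.mem_mb, reverse=True):
        best = max((h for h in hosts if h.free_mb >= c.mem_mb),
                   key=lambda h: h.free_mb, default=None)
        if best is None:
            raise RuntimeError(f"no host can fit {c.name}")
        best.placed.append(c.name)
        best.free_mb -= c.mem_mb


hosts = [Host("edge-1", 512), Host("edge-2", 1024)]
containers = [Container("sensor-reader", 128),
              Container("aggregator", 256),
              Container("uplink", 512)]
schedule(containers, hosts)
for h in hosts:
    print(h.name, h.placed, f"{h.free_mb} MB free")
```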

In other words, Linux is evolving to support an environment in which the computer is increasingly a complex of connected systems, rather than a single discrete server.

There is an ongoing abstraction of the operating system. Organisations are moving away from the handcrafted and hardcoded operating system instances that accompanied each application instance – just as they previously moved away from operating system instances lovingly crafted for each individual server.

Applications that depend on this sort of extensive operating system customisation to work are not a good match for a containerised environment. One of the trends that makes containers so interesting today in a way that they were not (beyond a niche) a decade ago is the wholesale shift toward more portable and less stateful application instances.

The operating system's role remains central – it’s just that you’re using a standard base image across all of your applications rather than taking that standard base image and tweaking it for each individual one.


In addition to the operating system’s role in securing and orchestrating containerised applications in an automated way, it is also important for providing consistency (and therefore portability) in other respects.

For example, true container portability requires being able to deploy across physical hardware, hypervisors, private clouds and public clouds. It requires safe access to digitally signed container images that are certified to run on certified container hosts. It requires an integrated application delivery platform built on open standards from application container to deployment target.
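
As a small illustration of the certified, digitally signed image idea – a sketch only, since real container tooling verifies signed manifests rather than bare archives, and the expected digest below is a placeholder – content can be checked against a known digest before it is run:

```python
# Sketch: verify a downloaded image archive against a known SHA-256 digest
# before using it. Real container tooling verifies signed manifests; this
# only illustrates the underlying idea of content verification.
import hashlib
import sys

EXPECTED_DIGEST = "sha256:<expected-digest-goes-here>"  # placeholder value


def digest_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return "sha256:" + h.hexdigest()


if __name__ == "__main__":
    actual = digest_of(sys.argv[1])
    if actual != EXPECTED_DIGEST:
        raise SystemExit(f"digest mismatch: {actual}")
    print("image archive matches expected digest")
```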

Add it all together and applications are becoming much more adaptable, much more mobile, much more distributed, and much more lightweight. Their placement and provisioning is becoming more automated. They’re better able to adapt to changes in infrastructure and process flow driven by business requirements.

This requires the operating system to adapt, as well, while building on and making use of existing security, performance and reliability capabilities. Linux is doing so in concert with other open source communities and projects to not only run containers but to run them in a portable, managed and secure way.

 

Sourced from Gordon Haff, Red Hat
