The container ecosystem is now a core part of many organisations’ technology roadmaps.
However, as with all emerging technologies, some counter-productive myths about containers have arisen. To make container adoption as easy and safe as possible, clarity is essential, so it's worth debunking these myths. Here are three to be aware of:
Myth 1: ‘Containers are basically light-weight VMs’
This is true in a hand-wavy kind of way – but it's more useful to think of containers as an executable format than as a virtual machine (VM). There's no hardware-abstraction penalty, and the ecosystem is largely open source, including systems for managing images and orchestrating complex environments.
Containers are more of a Goldilocks abstraction – neither too heavy nor too light, but just the right size and speed for fast feedback loops and deployments.
Myth 2: ‘Containers are insecure’
In the early days of container usage, this point had merit. But over the past few years, Docker, Red Hat, and the community at large have contributed loads of security-related features to the Docker runtime. These were designed for ‘enterprise-level’ security acceptance and include dropping capabilities, integration with SELinux and AppArmor, and secure computing mode (seccomp), among other things.
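As an illustrative sketch of what these features look like in practice, a hardened `docker run` invocation might drop all Linux capabilities, re-add only what the workload needs, and apply a seccomp profile (the image name and profile path here are placeholders):

```shell
# Drop every capability, then re-add only the one this workload needs
# (binding to a low port); apply a custom seccomp profile and forbid
# privilege escalation inside the container.
docker run --rm \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE \
  --security-opt seccomp=/path/to/profile.json \
  --security-opt no-new-privileges \
  myorg/myapp:1.0.0
```

On hosts with SELinux or AppArmor enabled, the Docker daemon applies its default policy to containers automatically, so the flags above layer on top of that baseline rather than replacing it.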
Of course, knowing what data is where and in what form is another fundamental security aspect. One best-practice technique to avoid container images scattered across multiple services and systems is to design, build, and maintain a standard container build and deployment pipeline. This pipeline should focus on building images, capturing metadata, running scans, and pushing images into a central container registry.
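The stages of such a pipeline can be sketched as a short script – the registry address, app name, and choice of Trivy as the scanner are all illustrative assumptions, not prescriptions:

```shell
#!/usr/bin/env sh
# Sketch of one pipeline run: build, capture metadata, scan, push.
REGISTRY="registry.example.com"            # central registry (placeholder)
APP="myapp"                                # workload name (placeholder)
VERSION="$(git rev-parse --short HEAD)"    # version the image by commit
IMAGE="${REGISTRY}/${APP}:${VERSION}"

docker build -t "$IMAGE" .                 # build the image
docker inspect "$IMAGE" > metadata.json    # capture image metadata
trivy image "$IMAGE"                       # vulnerability scan (example tool)
docker push "$IMAGE"                       # push to the central registry
```

Because every workload flows through the same script, the registry ends up holding a scanned, metadata-tagged image for each commit.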
This common, shared pipeline approach has an important and welcome prerequisite: to have a standard pipeline, you need standardised workloads. This means each containerised workload must adhere to coherent naming and versioning conventions, provide some metadata at build- and run-time (e.g. uniform status endpoints), and ship log and metric data consistently.
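One low-effort way to attach build-time metadata consistently is image labels; the example below uses the standard OCI annotation keys, with placeholder values:

```shell
# Stamp every image with the same metadata keys at build time, so the
# registry and tooling can rely on them being present.
docker build \
  --label org.opencontainers.image.version=1.0.0 \
  --label org.opencontainers.image.revision=abc1234 \
  --label org.opencontainers.image.source=https://example.com/myorg/myapp \
  -t myorg/myapp:1.0.0 .
```

Labels travel with the image, so downstream systems can read them back with `docker inspect` without needing access to the original build environment.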
Each workload must also follow twelve-factor app principles closely enough to run successfully as a pool of containers – and specifically, secrets or other sensitive data must never exist in plain text in the container image filesystem.
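In practice this means injecting secrets at run time rather than baking them into the image. A minimal sketch, with hypothetical paths and names:

```shell
# Anti-pattern: COPY secrets.env /app/ in the Dockerfile bakes
# credentials into every copy of the image, forever.
#
# Instead, keep the secret on the host (or in a secrets manager) and
# inject it only when the container starts:
docker run --rm \
  --env-file /etc/myapp/secrets.env \
  myorg/myapp:1.0.0
```

Environment variables are the simplest mechanism; orchestrators such as Kubernetes and Docker Swarm also offer dedicated secret stores that mount secrets into the container at run time.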
Once you have a standard pipeline and registry, you have an auditable source of truth for all images, can implement security and compliance checks, and really start to understand how people are using containers.
Myth 3: ‘I can only drive value from new applications’
You don’t have to limit your containerisation efforts to shiny new workloads. Some organisations we’ve worked with were able to get a lot of value by containerising legacy applications.
For example, a mission-critical service written in PHP5 might need its own machine because of conflicting dependencies with newer workloads in PHP7. Once containerised, the PHP5 app can run right next to the PHP7 workloads with no dependency issues.
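Because each container carries its own runtime, the two versions can share a host without conflict – roughly like this, using the official PHP image tags for illustration:

```shell
# Each container bundles its own PHP runtime and dependencies,
# so both versions coexist on the same host.
docker run -d --name legacy-app php:5.6-apache
docker run -d --name modern-app php:7.4-apache
```

The host itself needs no PHP installed at all; the dependency conflict that forced the dedicated machine simply disappears.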
If you are just getting started, you don’t have to go all in with a full end-to-end container orchestration platform. As long as you have a roadmap for adopting a system such as Kubernetes, there’s plenty of value in managing ‘dumb’ containers directly on individual hosts as an interim step.
You still get the value of a consistent container pipeline, easy dependency management, and the ability to run the same containers anywhere. This approach works well even for some legacy applications that require a writable filesystem – use bind mounts and away you go.
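For a legacy app that insists on writing to disk, a bind mount maps a host directory into the container; the paths below are illustrative:

```shell
# Bind-mount a host directory over the path the legacy app writes to,
# so its data persists on the host across container restarts.
docker run -d \
  -v /srv/legacy-app/data:/var/www/data \
  myorg/legacy-app:1.0.0
```

The container's image layers stay read-only and reproducible, while the one path that genuinely needs persistence lives on the host.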
Harness the potential
CTOs and other technology leaders should be excited about the future use of containers – not only because they enable easier portability between cloud platforms and on-premises environments, and give technical teams a huge amount of freedom to implement the best-fit language or stack inside the container, but also because of the sheer amount of value that can be extracted from a common platform for building and storing runnable application artefacts.
Sourced by Iskandar Najmuddin, specialist architect at Rackspace