Linux Container Success Hinges On Standards

We’ve seen Linux containers explode onto the technology scene over the past 18+ months, primarily because they bring together development and operational technologies in a way that hasn’t been possible to date. Containerization is, in many ways, the technology complement to the DevOps processes that have been established over the last six years.

However, many promising technologies don’t wind up having as meaningful an impact on the IT industry as they could, often due to the lack of stable standards around key components. Even with all the industry excitement, Linux containers are no exception: without established standards, the pace of innovation will inevitably slow, limiting their incredible potential.

For those of us who are veterans of the tech industry, arguing the need for standards might sound strange. Standards are often thought of as an impediment to technology development and not a fast lane. Traditionally, standards bodies recruit membership from dozens of companies and then vote on additions with a very real risk that the end results will be as effective as many modern-day legislation attempts -- meaning not at all. This is not from a lack of effort, but rather because the traditional structure often incorporates too much scope and too many opinions to be successful.

But this isn’t always the case. Standards around specific technology components that bring stability and predictability to the evolution of a technology can accelerate innovation by increasing the number of participants in the ecosystem. Take the Linux kernel as an example. A focus on stable kernel APIs has allowed a vast ecosystem of userspace content to be developed that now serves as the basis for almost all cloud innovation. Another example is Java: The structure around the Java Community Process has provided a way to expand the Java platform at a scale few would have thought possible a decade ago.

If Linux containers are going to replicate the commercial success of Linux and Java, establishing strong standards to allow for a greater number of individuals and companies to safely innovate is critical. Those standards will serve as the foundation on which ideas will be developed that we couldn’t even imagine today. It’s just like building a house: picking the right location and building a strong foundation are key, with the foundation being the difference between a rambler and a skyscraper.

So where should the industry establish standards for Linux containers? I’ve had the opportunity to work in this area for several years now and there are fairly clear boundaries for the core technologies in containers:

Container format. While this is often overlooked, it's one of the lowest-level and most important components. The format establishes how you package contents into a container, share common contents across multiple containers, and establish trust around the content inside your containers. Similar to RPM or Deb packages in the Linux ecosystem, this layer will be the difference between trusting the containers running on your system and giving away root access to them. And obviously, without trust, adoption will suffer.

A recent study demonstrated that more than 30% of the official Docker images on Docker Hub contain high-profile security vulnerabilities. For images pushed by individual users, the percentage of vulnerable images jumps to 40%. This isn’t a shortcoming of Docker Inc. or the Docker Hub, but instead highlights how critical it is to know the source of your containers and verify their authenticity.  If you can’t trust the source, you run the risk of exposing your production environment to unknown third parties.
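To make that concrete, here is a minimal sketch of the idea behind a content-addressed format: each layer is identified by a cryptographic digest recorded in a manifest, so identical layers can be shared across images and a downloaded layer can be rejected if it doesn't match what the publisher described. The layer filenames and manifest shape below are purely illustrative, not any particular image specification.

```python
import hashlib
import json

def layer_digest(path):
    """Return the sha256 digest of a layer tarball; content-addressed formats
    identify each layer this way rather than by file name."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return "sha256:" + h.hexdigest()

# Publisher side: package an image as a manifest naming each layer by digest.
# Identical layers (a common base OS, shared runtime dependencies) get identical
# digests, so they can be stored once and shared across many images.
layer_files = ["base-os.tar", "runtime-deps.tar", "myapp.tar"]  # hypothetical tarballs
manifest = {"layers": [{"file": p, "digest": layer_digest(p)} for p in layer_files]}
print(json.dumps(manifest, indent=2))

# Consumer side: before running anything, check that what you downloaded matches
# what the manifest says it should be. A mismatch means the content was corrupted
# or tampered with somewhere between the publisher and you.
for entry in manifest["layers"]:
    if layer_digest(entry["file"]) != entry["digest"]:
        raise RuntimeError(f"layer {entry['file']} failed verification -- do not run it")
```

In a real format, the manifest itself would also be signed, so the chain of trust reaches back to the publisher rather than just to the bits you happened to download.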

Container runtime. Once you have a container format that you would like to run, understanding how it will be run on your system is critical. Different Linux distributions cater to different technologies (such as sysvinit vs. systemd), and often your operational processes are built around those technologies. Being able to run containers based on a common format in a manner that suits your distribution and your operational knowledge is essential: It minimizes the learning curve, allowing sysadmins to ease into the new technology without having to relearn everything at once.
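As a rough illustration, here is what running an image looks like through one particular client, the Docker SDK for Python (an assumption on my part; any client that talks to your runtime of choice would make the same point). The image format stays the same, while the runtime underneath it, and how it meshes with your init system, can differ by distribution.

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()  # connect to whatever daemon the local environment provides

# Pull an image in the shared format, then run it. How the container is actually
# executed and supervised underneath is the runtime's business; the image itself
# does not change when the runtime does.
client.images.pull("alpine", tag="latest")
output = client.containers.run(
    "alpine:latest",
    command=["echo", "hello from a container"],
    remove=True,  # clean up the container once the process exits
)
print(output.decode().strip())
```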

Container orchestration. Soon after you’ve containerized your first application, you realize that few applications are built on a single container. Composable applications will be built using many containers, and you will need to instruct those containers to act as a unit. This process of orchestrating changes to the containers that serve an application establishes the final standardized layer; without a way to describe multi-container applications, you end up with a cool technology that has little real use.
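A sketch of the idea, again using the Docker SDK for Python with hypothetical images and a deliberately naive startup order: the application is described declaratively, and the orchestration layer is whatever turns that description into running, connected containers.

```python
import docker

client = docker.from_env()

# A declarative description of a two-container application (hypothetical images
# and settings). An orchestration standard is essentially an agreed-upon schema
# for this kind of description, plus the machinery that acts on it.
app = {
    "db":  {"image": "postgres:latest", "env": {"POSTGRES_PASSWORD": "example"}},
    "web": {"image": "nginx:latest", "ports": {"80/tcp": 8080}},
}

# A shared network lets the containers find each other by service name.
client.networks.create("demo-app")

for name in ["db", "web"]:  # hard-coded startup order; real orchestrators resolve dependencies
    spec = app[name]
    container = client.containers.run(
        spec["image"],
        name=name,
        detach=True,
        network="demo-app",
        environment=spec.get("env", {}),
        ports=spec.get("ports", {}),
    )
    print(f"started {name}: {container.short_id}")
```

Real orchestrators add scheduling, health checks, and dependency resolution on top of this, but the piece worth standardizing is the description of the application itself.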

Establishing standards and stability for these components allows choice in implementation, which in turn drives innovation. This is often a hotly contested area because choice and divergence could mean a lack of predictability, and the overall user experience would suffer as a result. Good standards, however, provide enough of a foundation to deliver consistency without rigidity.

Take a standard like SQL: A quick study of the basics of the SQL standard will give you a working knowledge of most relational databases, regardless of the vendor or implementation. Each database adds plenty of implementation-specific improvements on top of competing on its implementation of the spec, but having a stable specification and a choice of technology has allowed a massive ecosystem to develop.
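As a small demonstration, the snippet below runs plain, standard SQL against Python's built-in sqlite3 module purely for convenience; the same CREATE, INSERT, and SELECT statements would behave essentially the same way on PostgreSQL, MySQL, or most other relational databases. (The ? placeholders are Python DB-API convention rather than part of the SQL standard, and vary by driver.)

```python
import sqlite3

# Standard SQL run against an in-memory SQLite database: the portable part is
# the SQL itself, which is exactly what a stable specification buys you.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE containers (name TEXT, image TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO containers (name, image, status) VALUES (?, ?, ?)",
    [("web", "nginx", "running"), ("db", "postgres", "running"), ("batch", "alpine", "exited")],
)
for row in conn.execute(
    "SELECT name, image FROM containers WHERE status = 'running' ORDER BY name"
):
    print(row)
```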

With Linux containers, we have the same opportunity. In the format component, implementations range from tarballs and overlay filesystems to device mapper technologies. They all serve the same purpose -- allowing content to be shared effectively -- but each utilizes unique capabilities of the operating system. The same goes for the runtime: a golang daemon or a systemd extension provides different advantages to different users.

The key point is that allowing innovation to proceed in parallel, in the areas where individuals and companies have an interest, is critical to progress. If done right, I believe we will see Linux containers become one of the fastest-growing technologies since the Linux kernel and Java.

Editor's note: Keep an eye out for more developments on container standards. Docker, CoreOS and 18 others have teamed up behind a common container specification. Read about it here.