The Key to Container Control
Successful container deployments involve a concept that originated from control systems theory.
January 18, 2018
Containers remain hot as businesses try to unleash the benefits inherent to container architecture. Container ecosystems from the likes of Google, Docker, CoreOS, Mesosphere, and Joyent continue to grow and expand through the work of both adopters and contributors. Organizations across all major industries have taken to containers for infrastructure cost efficiency and application portability, scalability, and dev agility. Observability is key to successful container deployments because it goes hand-in-hand with controllability. If you can observe a system's internals well, then you can control that system's outputs equally well.
What is observability?
Observability is a measure of how effectively internal states of a system can be inferred from knowledge of its external outputs. It is a concept that originates from control systems theory. Currently, there are different opinions on what is sufficient for observability. Traditional IT professionals believe that metrics and logging are sufficient. IT professionals who focus on logging, however, believe that metrics are too noisy and make one susceptible to paralysis by over-analysis. Meanwhile, DevOps engineers associate observability with tracing microservices.
Who's right? Who's wrong? There are so many shades of observability gray that finding common ground can be a challenge in and of itself. Application stacks are evolving with such velocity, variety, and volume that a combination of all three (metrics, logs, and traces) is needed to build a viable observability protocol.
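To make the three pillars concrete, here is a minimal Python sketch of how a single request might surface as all three signals at once; the service name, field names, and `handle_request` helper are hypothetical, and real deployments would use a metrics library and a tracing SDK rather than plain dicts:

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("checkout")  # hypothetical service name


def handle_request():
    """Handle one request and emit a metric, a log line, and a trace span."""
    trace_id = uuid.uuid4().hex           # trace ID correlates work across services
    start = time.perf_counter()
    # ... actual service logic would run here ...
    duration_ms = (time.perf_counter() - start) * 1000

    # Metric: a numeric, aggregatable measurement
    metric = {"name": "request_duration_ms", "value": round(duration_ms, 3)}

    # Log: a discrete, human-readable event, tagged with the trace ID
    log.info(json.dumps({"trace_id": trace_id, "event": "request_handled"}))

    # Trace span: timing plus the causal context needed to follow the request
    span = {"trace_id": trace_id, "duration_ms": duration_ms}
    return metric, span
```

The point of the sketch is the shared `trace_id`: metrics tell you *that* something is slow, logs tell you *what* happened, and traces tie both back to a specific request path.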
Additionally, many organizations and IT professionals still struggle to understand both container technology and how to apply container architecture to their enterprise application portfolio. Trying to understand containers while implementing observability across the application stack can be a daunting task.
Container basics
To start, a container consists of an entire runtime environment -- an application, its dependencies, libraries and other binaries, and configuration files needed to run it -- bundled into one package designed for lightweight, short-term use. When implemented correctly, containers enable much more agile and portable software development environments.
The container model is not intended to be a long-term environment. Rather, containers are designed to be paired with microservices in order to do one thing very well and move on. With this in mind, let’s discuss some of their benefits.
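The "entire runtime environment bundled into one package" described above can be sketched as a minimal Dockerfile; the base image, application file, and dependency manifest here are hypothetical stand-ins:

```dockerfile
# Base image supplies the OS libraries and language runtime (hypothetical choice)
FROM python:3.12-slim

WORKDIR /app

# Bundle the dependency manifest and install the libraries into the image
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Bundle the application code and its configuration file
COPY app.py config.yaml ./

# The container does one thing: run this service
CMD ["python", "app.py"]
```

Everything the application needs travels inside the image, which is what makes the resulting container portable across environments.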
First, containers spin up much more quickly and use less memory, leaving a smaller footprint on data center resources than traditional virtualization. This is important, as it enables process efficiency for the development team, which in turn leads to much shorter development and quality assurance testing cycles. With containers, a developer could write and quickly test code in parallel container environments to understand how each performs and decide on the best code fork to take. Containers should be ephemeral, which means they can be stopped, changed, and newly built with minimal setup and configuration.
Next, containers enable greater collaboration between multiple team members who are all contributing to a project. Version control and consistency of applications can be problematic with multiple team members working in their own virtual environments and with their own versions of the code. Containers, on the other hand, drive consistency in the deployment of an image; combining this with a platform like GitHub allows for quick packaging and deployment of consistently known-good images. The ability to quickly spin up mirror images of an application allows various members of the same development team to test and rework lines of code in flight, within disparate but consistent image environments that can ultimately synchronize and integrate more seamlessly.
Observability in container ecosystems
Containers aim to drive scalability and agility by normalizing the consistency of configurations and application delivery. Thus, automation and orchestration become key to successful container efficacy. Organizations leverage containers to automate the provisioning of resources and applications, whether to run a service in production, deliver it, or test it beforehand, and to do so at web scale. At this scale, you need to orchestrate the workload to take advantage of the collaboration efficiency between all development team members.
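Orchestration at this scale is usually expressed declaratively. As one common illustration, a minimal Kubernetes Deployment manifest asks the orchestrator to keep a fixed number of identical replicas of a containerized service running; the service name, labels, and image reference below are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service              # hypothetical service name
spec:
  replicas: 3                    # orchestrator keeps three identical containers running
  selector:
    matchLabels:
      app: web-service
  template:
    metadata:
      labels:
        app: web-service
    spec:
      containers:
        - name: web
          image: registry.example.com/web-service:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```

The operator declares the desired state (three replicas of this image), and the orchestrator continuously reconciles reality toward it, replacing containers that fail or drift.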
Optimizing, automating, and securing containers at web scale require observability in order to control the outcomes. It's the most efficient and effective way to troubleshoot and remediate at scale with agility. To determine how best to integrate observability and container technology into an existing environment, IT professionals need comprehensive monitoring that provides a single point of truth across the entire IT environment and application stack.
Containers offer the agility, availability, and scalability that organizations desire to enable their digital transformation dreams. However, with great power comes great responsibility. To control containers at scale, practitioners need to integrate observability into their game plan. Otherwise, they should be prepared to deal with and clean up the web-scale mess left by uncontained containers.