Container Networking: Challenges & Requirements
The Docker networking model falls short of enterprise requirements. Here are the key considerations for container networking.
June 22, 2015
At the recent OpenStack Summit in Vancouver, more than 6,000 attendees were polled about their use of containers. A fraction of the audience raised their hands -- maybe 5%, by my estimation -- when asked if they have containers in production. But almost every hand shot up when asked who is looking at moving containers to production over the next few years.
That show of hands illustrates that while containers are an early-stage technology, organizations have big plans for them. With multiple options like Docker and Kubernetes, users are carefully considering their adoption path. Containers offer clear advantages: simplified and faster application deployment, portability and a lightweight footprint. But there are also risks.
One risk is integration. Open-source communities are looking at how to integrate containers into existing cloud and automation frameworks, such as Magnum and several other OpenStack projects. Another issue is container networking. Users often ask me how to design a networking solution that supports both containers and VMs. VMs, containers, and bare-metal deployments usually present very different models for networking.
Let’s take a step back and look at how container networking works today. It is a simple, primarily single-host architecture. The Docker networking model, for example, rests on a few simple assumptions (sketched in code after the list below):
It leverages a local Linux bridge (one within each host) to which containers are connected.
Each compute node has an IP address that is visible to the cluster.
Each container has a private IP address that is NOT visible to the cluster.
NAT is used to bind the container's private IP to the compute node's public IP.
Additionally, a load balancer can be used to map a service to a set of IPs and ports.
iptables rules are used for network segmentation and isolation between applications and tenants.
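Here is a minimal sketch of that default model, assuming a Linux host with the docker CLI and the public nginx image available; the published port 8080 is an arbitrary example:

```python
# Sketch of Docker's default single-host model: the container gets a private
# address on the host-local docker0 bridge, and a NAT (DNAT) rule installed by
# Docker maps a host port to that private address. Assumes the docker CLI is
# installed and the nginx image is available.
import subprocess

def sh(cmd):
    """Run a shell command and return its stdout, stripped."""
    return subprocess.check_output(cmd, shell=True).decode().strip()

# Start a container and publish container port 80 on host port 8080.
container_id = sh("docker run -d -p 8080:80 nginx")

# The container's address lives on the host-local bridge (typically 172.17.x.x);
# it is not routable from other nodes in the cluster.
private_ip = sh(
    "docker inspect -f '{{ .NetworkSettings.IPAddress }}' " + container_id
)
print("container private IP (host-local only):", private_ip)

# Other hosts reach the service through this compute node's own IP and the
# published port; iptables DNATs <host-ip>:8080 to <private_ip>:80.
print("reachable from the cluster as: <host-ip>:8080")
```

From any other node in the cluster, the service is reachable only as the host's IP plus the published port; the private 172.17.x.x address stays behind the host-local bridge and NAT.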
This model falls short in several respects when measured against enterprise requirements. It limits the ability of a multi-tenant, container-based cloud to scale across multiple hosts. HA configurations are limited. Maintaining consistent connectivity and security as workloads move is another problem. And the combined use of iptables and NAT restricts scalability and performance, wiping out one of the primary advantages of using containers.
So what should a networking solution for containers provide? And, how should you evaluate the fit of a solution to a specific application? Let’s break this down into three questions; the answers should help you better understand what’s unique about container networking.
1. What type of application will you be running on your container-based infrastructure?
This directly impacts the blueprint of your network infrastructure and how it will be created. Will your applications require rich network topologies with advanced services? Are they multi-tenant, or will simple “flat networks” be sufficient?
A virtual networking solution for containers should allow both the end user (tenant) and the cloud operator to define and control their network needs. The solution must also provide the constructs needed for micro-segmentation and isolation across multiple physical hosts.
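As one illustration of what such a construct can look like, here is a minimal, hypothetical sketch of tenant-defined micro-segmentation using the Python openstacksdk against a Neutron-style API; the cloud name "mycloud" and the tier names are placeholders, and the point is the shape of the tenant-facing construct rather than any particular product:

```python
# Illustrative only: micro-segmentation expressed as tenant-defined security
# groups and rules via openstacksdk (Neutron-style API). The cloud name
# "mycloud" and the group names are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")  # credentials come from clouds.yaml

# One group per application tier; group membership, not IP addresses, drives policy.
web_sg = conn.network.create_security_group(name="web-tier")
db_sg = conn.network.create_security_group(name="db-tier")

# Only members of the web tier may reach the database tier on port 5432.
conn.network.create_security_group_rule(
    security_group_id=db_sg.id,
    direction="ingress",
    protocol="tcp",
    port_range_min=5432,
    port_range_max=5432,
    remote_group_id=web_sg.id,
)
```

Because the rules reference groups rather than addresses, the same policy holds as workloads scale out or move between hosts.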
2. What are your performance and scalability requirements?
As you think about this question, consider the demands your application will place on the infrastructure. Look for a solution that provides isolation and network functions in a fully distributed architecture, which paves the way for growth and expansion of your application. The networking solution should scale out across multiple physical hosts as the cloud deployment grows, and it should be tightly integrated with the cloud orchestration framework.
3. Will you need to interconnect containers with VMs and bare-metal workloads?
Most applications will require support for hybrid workloads, so look for a solution that covers containers as well as VMs and bare metal. A consistent abstraction model (networks, subnets, routers, interfaces, floating IPs) and a consistent set of APIs for configuration and automation are one way to get this done.
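To make that concrete, here is a minimal sketch of that abstraction model, again using openstacksdk against a Neutron-style API; the cloud name "mycloud" and the external network name "public" are assumptions about the environment:

```python
# Illustrative sketch of a consistent network/subnet/router/floating-IP model,
# using openstacksdk against a Neutron-style API. The cloud name "mycloud" and
# the external network "public" are assumptions about the target environment.
import openstack

conn = openstack.connect(cloud="mycloud")

# Tenant-defined topology: one network and subnet for the application.
net = conn.network.create_network(name="app-net")
subnet = conn.network.create_subnet(
    network_id=net.id, name="app-subnet", ip_version=4, cidr="10.10.0.0/24"
)

# A router gives the subnet a path to the external ("public") network.
ext_net = conn.network.find_network("public")
router = conn.network.create_router(
    name="app-router", external_gateway_info={"network_id": ext_net.id}
)
conn.network.add_interface_to_router(router, subnet_id=subnet.id)

# A floating IP exposes a single service endpoint, independent of whether the
# port behind it belongs to a container, a VM, or a bare-metal node.
fip = conn.network.create_ip(floating_network_id=ext_net.id)
print("service reachable at:", fip.floating_ip_address)
```

Whether the ports attached to app-net ultimately back containers, VMs, or bare-metal nodes then becomes an orchestration detail rather than a networking redesign.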
Cloud users are calling for a convergence of networking models across all workload types, combined with powerful networking abstractions that simplify container-to-container communication and add advanced network functionality and micro-segmentation.
What are your challenges and requirements for networking and containers? Please share your thoughts in the comments section below. I’d love to hear them!