06/22/2015, 7:00 AM

Container Networking: Challenges & Requirements

The Docker networking model falls short in meeting enterprise requirements. Here are key considerations for container networking.

At the recent OpenStack Summit in Vancouver, more than 6,000 attendees were polled about their use of containers. A fraction of the audience raised their hands -- maybe 5%, by my estimation -- when asked if they have containers in production. But almost every hand shot up when asked who is looking at moving containers to production over the next few years.

That show of hands illustrates that while containers are an early-stage technology, organizations have big plans for them. With multiple options like Docker and Kubernetes, users are carefully considering their adoption path. Containers offer clear advantages: simplified and faster application deployment, portability and a lightweight footprint. But there are also risks.

One risk is integration. Open-source communities are looking at how to integrate containers into existing cloud and automation frameworks; OpenStack's Magnum project is one example of that effort. Another issue is container networking. Users often ask me how to design a networking solution that supports both containers and VMs, because VMs, containers, and bare-metal deployments usually present very different networking models.

Let’s take a step back and look at how container networking works today. It is built on a simple, primarily single-host architecture. The Docker networking model, for example, rests on a few simple assumptions:

  • It leverages a local (within each host) Linux bridge to which containers are connected.
  • Each compute node has an IP address that is visible to the cluster.
  • Each container has a private IP address that is NOT visible to the cluster.
  • NAT is used to bind the container's private IP to the compute node's public IP.
  • Additionally, a load balancer can be used to map a service to a set of IPs and ports.
  • iptables rules are used for network segmentation and isolation between applications and tenants.
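The model described by these bullets can be sketched as a toy Python simulation. The class, addresses, and port numbers below are purely illustrative (they are not Docker's API); the point is to show the NAT binding between a container's private bridge IP and the host's cluster-visible address:

```python
# Toy model of Docker's default single-host networking: each container
# gets a private IP on the local bridge, and a published service is
# NAT-mapped to a port on the host's cluster-visible address.

class Host:
    def __init__(self, public_ip):
        self.public_ip = public_ip    # visible to the cluster
        self.nat_table = {}           # host_port -> (container_ip, container_port)
        self._next_ip = 2

    def attach_container(self):
        """Assign the next private IP on the local bridge (172.17.0.0/16 here)."""
        ip = f"172.17.0.{self._next_ip}"   # NOT visible to the cluster
        self._next_ip += 1
        return ip

    def publish(self, container_ip, container_port, host_port):
        """NAT binding: traffic to public_ip:host_port is forwarded inward."""
        self.nat_table[host_port] = (container_ip, container_port)

    def route_inbound(self, host_port):
        """Resolve an inbound connection the way the NAT rule would."""
        return self.nat_table.get(host_port)

host = Host("10.0.0.5")
web = host.attach_container()      # -> '172.17.0.2', unreachable from outside
host.publish(web, 80, 8080)        # analogous to `docker run -p 8080:80`
print(host.route_inbound(8080))    # -> ('172.17.0.2', 80)
```

Note that every inbound flow passes through the NAT lookup, which is exactly the per-connection overhead the article criticizes below.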

This model can fall short in several ways. It limits the ability of a multi-tenant, container-based cloud to scale across multiple hosts. High-availability configurations are limited. Maintaining consistent connectivity and security as workloads move is also a problem. And the combined use of iptables and NAT restricts scalability and performance, wiping out one of the primary advantages of using containers.

So what should a networking solution for containers provide? And, how should you evaluate the fit of a solution to a specific application? Let’s break this down into three questions; the answers should help you better understand what’s unique about container networking.

1. What type of application will you be running on your container-based infrastructure?

This directly impacts the blueprint of your network infrastructure and how it will be created. Will your applications require rich network topologies with advanced services? Are they multi-tenant, or will simple “flat networks” be sufficient?

A virtual networking solution for containers allows both the end user (tenant) and the cloud operator to define and control their network needs. The solution must also provide the constructs needed for micro-segmentation and isolation across multiple physical hosts.
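To make the micro-segmentation idea concrete, here is a minimal sketch, with hypothetical tenant and segment names of my own invention: each workload carries labels, and a default-deny policy check is applied per flow regardless of which physical host the workloads land on.

```python
# Hypothetical micro-segmentation sketch: workloads are labeled with a
# tenant and a segment, and cross-segment traffic must match an
# explicit allow rule. Names and rules here are illustrative only.

from dataclasses import dataclass

@dataclass(frozen=True)
class Workload:
    name: str
    tenant: str
    segment: str   # e.g. "web", "db"

# Illustrative allow-list: (tenant, src_segment, dst_segment)
POLICY = {
    ("acme", "web", "db"),
}

def allowed(src: Workload, dst: Workload) -> bool:
    """Default-deny: cross-tenant traffic is always blocked, and
    cross-segment traffic must match an explicit policy entry."""
    if src.tenant != dst.tenant:
        return False
    if src.segment == dst.segment:
        return True    # workloads in the same segment talk freely
    return (src.tenant, src.segment, dst.segment) in POLICY

web = Workload("web-1", "acme", "web")
db = Workload("db-1", "acme", "db")
other = Workload("web-9", "globex", "web")

print(allowed(web, db))      # True: explicitly permitted
print(allowed(db, web))      # False: no reverse rule
print(allowed(web, other))   # False: different tenants
```

The design choice worth noticing is that the policy references labels, not host addresses, so the same isolation holds when a workload migrates between physical hosts.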

2. What are your performance and scalability requirements?

As you think about this question, consider the requirements of your application on the infrastructure. Consider a solution that provides isolation and network functions in a fully distributed architecture to pave the way for growth and expansion of your application. The networking solution should scale out across multiple physical hosts as the cloud deployment grows and be tightly integrated within the cloud orchestration framework.

3. Will you need to interconnect containers with VMs and bare-metal workloads?

Most applications will require support for hybrid workloads, so look for a solution that covers containers, VMs, and bare metal alike. A consistent abstraction model (networks, subnets, routers, interfaces, floating IPs) and a consistent set of APIs for configuration and automation are one way to get this done.
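A rough sketch of what such a consistent abstraction model might look like is below. The classes and field names are my own illustration (loosely inspired by the networks/ports/floating-IPs constructs mentioned above), not any vendor's API; the point is that the workload type becomes just an attribute of a port rather than a separate API:

```python
# Sketch of a unified networking abstraction: VMs, containers, and
# bare-metal workloads all attach to the same Network object through
# the same port API, so automation code is identical for all three.

class Network:
    def __init__(self, name, cidr):
        self.name, self.cidr = name, cidr
        self.ports = []

    def create_port(self, workload_type, fixed_ip):
        """Same call for any workload; type is just metadata on the port."""
        port = {"type": workload_type, "fixed_ip": fixed_ip, "floating_ip": None}
        self.ports.append(port)
        return port

def associate_floating_ip(port, floating_ip):
    """Identical floating-IP operation whether the port backs a VM,
    a container, or a bare-metal machine."""
    port["floating_ip"] = floating_ip

net = Network("app-net", "192.168.10.0/24")
vm_port = net.create_port("vm", "192.168.10.11")
ctr_port = net.create_port("container", "192.168.10.12")
bm_port = net.create_port("baremetal", "192.168.10.13")

associate_floating_ip(ctr_port, "203.0.113.7")   # one API for all workloads
print([p["type"] for p in net.ports])   # ['vm', 'container', 'baremetal']
```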

Cloud users are calling for a convergence of networking models for any workload combined with powerful networking abstractions to simplify container-to-container communication and add advanced network functionality and micro-segmentation.

What are your challenges and requirements for networking and containers? Please share your thoughts in the comments section below. I’d love to hear them!


Comments

Re: Container Networking: Challenges & Requirements

At present, container hype still seems to outweigh container production use - but the technology isn't going away any time soon. Giving a little context, explaining the broad concepts at play, and then drilling down and explaining how those concepts impact real and specific production environments, is exactly what I'm looking for in an article like this. When you mention solutions for hybrid workloads, or virtual networking solutions that cater to containers, Valentina, do you have any specific vendors or products in mind? Many enterprises are favoring a wait-and-see approach, but for those brave enough not to, do they have to build everything themselves, or are some standards starting to emerge?

Security tops the list again and again when people talk about container concerns, and while security should always be on our minds, I wonder if we're not getting a little ahead of ourselves; has there been a high-profile breach involving containers yet? Have there been any? Rocket has been happy to challenge Docker's supposed security issues, but how much of a selling point can this really be, when, as you said, only 5% of people out there (that's 5% of people at an OpenStack conference) are even using containers in a production environment? Charlie Babcock posted today about how the two companies now seem to have banded together (along with others) to make a common container standard that's presumably more secure for everyone - that seems to me like a better approach than an arms race to be the most barely acceptable in terms of security.