SDN Lessons From Assembly Lines
Infrastructure standardization is key to achieving the cost benefits of SDN.
July 6, 2016
When software-defined networking (SDN) first blasted its way onto the scene, many experts pointed to the classic “30/30/3” problem as the reason for its existence. The problem, if you aren’t familiar, is a generalization of the relationship between data growth, IT costs, and revenue. It posits that data volumes and IT costs each grow at roughly 30% a year, while revenue grows far more slowly, at only about 3% a year.
SDN was conceived as a potential solution in part because it promised to lower the IT costs associated with traditionally manual methods of managing the network. SDN was going to do this by operationalizing the network: what we now often refer to as applying a DevOps approach to delivering the network services apps need, using more automated toolsets.
Now, the problem with that approach is that there’s very little standardization in the network. Unlike app dev environments in enterprises, which are largely standardized on a few platforms hosting apps written in a limited array of languages, the network is the Wild West. It is built mainly from custom hardware, which is necessary to achieve the speeds, feeds, and capacity required to support the massive growth of apps and users fueling today’s digital economy, and each platform and system is an entity unto itself.
Each has its own configuration paradigm, command-line interfaces (CLIs), object models, and APIs. That means the API you use to automatically configure basic networking services like VLANs, routing, and ACLs is not the one you use to manage firewall rules, nor is it the same one you use to provision and manage load balancing, caching, DNS, or any of the other 20-odd services likely running in the data center.
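To make that disparity concrete, here is a minimal sketch in Python of what automating three such services can look like. The endpoints, payload shapes, and credentials are purely hypothetical and don’t reflect any particular vendor’s real API; the point is simply that each device speaks its own dialect.

```python
# Hypothetical examples only: three devices, three different API styles.
# None of these endpoints or schemas belong to a specific vendor.
import requests

def add_vlan_on_switch(switch_ip, vlan_id, name):
    # Switch exposes a RESTCONF-style endpoint with a flat JSON object
    return requests.post(
        f"https://{switch_ip}/restconf/data/vlans",
        json={"vlan-id": vlan_id, "name": name},
        auth=("admin", "secret"),
        verify=False,
    )

def add_firewall_rule(fw_ip, src, dst, port):
    # Firewall wants a nested "rule" object with entirely different field names
    return requests.post(
        f"https://{fw_ip}/api/v1/policies/rules",
        json={"rule": {"source": src, "destination": dst,
                       "service": {"protocol": "tcp", "port": port},
                       "action": "allow"}},
        headers={"X-Auth-Token": "example-token"},
        verify=False,
    )

def add_lb_pool_member(lb_ip, pool, member):
    # Load balancer addresses objects by URL path and uses yet another schema
    return requests.put(
        f"https://{lb_ip}/mgmt/pools/{pool}/members/{member}",
        json={"state": "enabled", "monitor": "http"},
        auth=("admin", "secret"),
        verify=False,
    )
```

Three services, three authentication schemes, three object models. Multiply that by every platform in the data center and the automation burden becomes clear.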
It’s this kind of complexity that contributes to the 30/30/3 problem and makes it hard to solve, even with software-defined tools and frameworks becoming common in data centers everywhere.
This problem is not going away. The decomposition of applications and emerging app architectures like microservices and serverless are going to continue to stress infrastructure in new and existing ways. I am not the first to say that microservices and similar emerging architectures reduce complexity in the app environment by shifting it into operations.
For every new microservice or app, there are a growing number of network services that must be provisioned and managed. If there are even just five network and app services per microservice, then as an app decomposes, the infrastructure grows linearly: two microservices mean 10 services to provision and manage, three mean 15, and so on. The speed gained by moving to such architectures is lost when it slams into the network.
Simply automating the provisioning and configuration and orchestrating the deployment process in production is not necessarily going to address a 30% growth in costs. The disparity in APIs and object models makes it increasingly difficult to standardize across the data center in a way that will actually provide real relief to this existential problem.
This service sprawl is made even more challenging by the migration of existing applications into cloud environments. Network and infrastructure teams are now tasked not only with managing the increased complexity driven by sheer volume, but also with operating in very different environments using dissimilar toolsets and APIs.
Both SDN and cloud derive their core benefits largely from the notion of abstraction; that is, they are designed to reduce complexity by offering a unified (consistent) interface that lets tools, frameworks, and people easily provision and manage services across the infrastructure. The issue on-premises is that infrastructure and network professionals must build that abstraction layer themselves. Whether they’re building their own cloud or leveraging an orchestration stack, they must interact with 15 to 20 different APIs and object models.
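For illustration, here is a rough Python sketch of the kind of abstraction layer teams end up building: one consistent interface in front of whatever vendor-specific calls each platform actually requires. The class and method names are invented for this example and stand in for the adapters a team would have to write per platform.

```python
# Sketch of a homegrown abstraction layer over heterogeneous network services.
# Class names and methods are illustrative, not any real product's API.
from abc import ABC, abstractmethod

class NetworkService(ABC):
    """Unified interface that orchestration tooling codes against."""

    @abstractmethod
    def provision(self, app_name: str, config: dict) -> None: ...

    @abstractmethod
    def deprovision(self, app_name: str) -> None: ...

class VendorALoadBalancer(NetworkService):
    def provision(self, app_name, config):
        # Translate the generic config into vendor A's object model and API calls
        ...

    def deprovision(self, app_name):
        ...

class VendorBFirewall(NetworkService):
    def provision(self, app_name, config):
        # Same intent, but an entirely different API and payload shape underneath
        ...

    def deprovision(self, app_name):
        ...

def deploy(app_name: str, services: list, config: dict) -> None:
    # Pipelines and tools only ever see the unified interface
    for svc in services:
        svc.provision(app_name, config)
```

The value of the exercise is that tools and pipelines program against one interface, so the vendor-specific translation work is written once per platform instead of being repeated in every automation script. The fewer distinct adapters there are to write and maintain, the cheaper that layer becomes, which is exactly where standardization pays off.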
One way to address this is standardization: carefully choosing the platforms that provide network and app services, and eliminating as much variability as possible. One of the tenets of Six Sigma, a lean approach used in both manufacturing and app dev project management, is that reducing variability results in fewer errors. In manufacturing, that means fewer defects. In IT, it means fewer outages and greater stability, which is ultimately as important to the network as time to market is for apps.
We often use the analogy of Ford and his assembly line as proof positive that automation reduces costs and improves time to market. But we forget that part of the magic of the assembly line was, in fact, standardization: standard parts, standard methods, and standard tools. If we’re to achieve the same results from our digital assembly lines, we have to examine our infrastructure with an eye toward standardization.
Doing so can reduce the impact of service sprawl, speed development and delivery, and provide a more consistent, manageable means of provisioning and managing the network and infrastructure services required. Perhaps then the 30/30/3 problem won’t be so much of a problem anymore.