Where DevOps Meets The Network

Network architects and operators need to work with application developers and operators to achieve optimal application performance.

Lori MacVittie

June 3, 2015

There's a belief -- or maybe it's more of a perception -- that high-speed networks pretty much eliminate performance issues. After all, if the packets are traveling faster, performance gets better, right? The rabbit is going to beat the turtle.

Only we all know he doesn't, right? 

That's because he stopped. Many times. The reason he stopped is irrelevant to this discussion. The reality is he took breaks and naps and pretty much destroyed his "high speed" performance with lots and lots of stops along the way.

And that’s where this analogy becomes an overlay on the network (pun intended). Because even with the incredible network speeds we have today -- and the increases still coming -- we still have latency.

Latency is, in its simplest definition, the amount of time it takes for data to get from one place to another. For some, this means the time it takes for a client to get a response from its app. But for those architecting the app and the network, it's also about the time between hops in the network -- and not just the network in the sense of routers and switches, but the network in the sense of the services that deliver the apps: security, load balancing, and caching.
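
To make those two views concrete, here's a minimal sketch (Python standard library only; example.com is a stand-in for your own app's hostname) separating raw network round-trip time from the client-perceived response time:

```python
import socket
import time
import urllib.request

HOST = "example.com"  # stand-in hostname; point this at your own app

# Rough network latency: the time to complete a TCP handshake.
start = time.perf_counter()
sock = socket.create_connection((HOST, 80), timeout=5)
connect_ms = (time.perf_counter() - start) * 1000
sock.close()

# Client-perceived latency: a full request/response cycle, which folds
# in every hop and every service between the client and the app.
start = time.perf_counter()
urllib.request.urlopen(f"http://{HOST}/", timeout=5).read()
response_ms = (time.perf_counter() - start) * 1000

print(f"TCP connect: {connect_ms:.1f} ms, full response: {response_ms:.1f} ms")
```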

In a traditional architecture, there is X latency caused by network propagation, transmission, and processing by intermediaries. We could do the math, but this isn't graduate school and, to be honest, I hated those calculations. Suffice it to say that every "bump" in the wire (hop) introduces latency, period. And every protocol introduces its own amount of latency on top of the base latency of IP communications. All that latency adds up to the total response time of an application, which we try valiantly to keep under five seconds because users are demanding.
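
We can still gesture at the arithmetic without the graduate-school version. A back-of-the-envelope sketch, with every number purely illustrative:

```python
# One request traversing a traditional service chain. All figures are
# illustrative placeholders, not measurements.
hops_ms = {
    "firewall": 0.5,
    "load balancer": 1.0,
    "cache": 0.8,
    "app server processing": 150.0,
    "database round trip": 20.0,
}
propagation_ms = 40.0  # client <-> data center round trip

total_ms = propagation_ms + sum(hops_ms.values())
print(f"estimated response time: {total_ms:.1f} ms")
# Every additional bump in the wire adds its own slice to this sum.
```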

To make this more difficult, emerging architectures like microservices increase that latency by introducing even more stops along the way. A single app may suddenly be composed of 10 services, each with its own set of network services. Ten times the services, ten times the latency.
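
The same back-of-the-envelope arithmetic shows how quickly sequential service calls stack up (again, every figure here is hypothetical):

```python
# End-to-end estimate when an app fans out across n microservices
# called in sequence, each fronted by its own network services.
# All numbers are hypothetical.
CLIENT_RTT_MS = 40.0     # client <-> data center
SERVICE_CHAIN_MS = 2.3   # firewall + load balancer + cache, per service
SERVICE_WORK_MS = 15.0   # each microservice does a slice of the work
EAST_WEST_HOP_MS = 0.5   # network hop between services

for n in (1, 5, 10):
    total = CLIENT_RTT_MS + n * (SERVICE_CHAIN_MS + SERVICE_WORK_MS + EAST_WEST_HOP_MS)
    print(f"{n:2d} services -> ~{total:.1f} ms")
```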

This is where DevOps meets the network: where the notion of application performance management must include those outside app dev and even ops. Ops in the traditional sense, meaning compute or application infrastructure, cannot address this key aspect of application performance alone. It's not just about the app or even its services; it's also a topological (and therefore network architecture) issue.

Sure, ops can tweak TCP and leverage multiplexing or adopt HTTP/2 or SPDY to improve round-trip times and reduce performance-killing latency, but that cannot and does not impact the latency inherent in the network architecture. Too many hops and too much hairpinning (or tromboning, depending on your preference) are going to impact performance irrespective of the efforts of ops and dev.
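
For a taste of what those ops-side levers look like, here's a minimal sketch using the third-party httpx library (installed with its HTTP/2 extra, pip install 'httpx[http2]'; example.com is again a stand-in). It reuses one multiplexed connection for several requests instead of paying connection-setup latency each time -- which helps round trips between client and server, but does nothing about extra hops or hairpinning inside the network itself:

```python
import httpx  # third-party; pip install 'httpx[http2]'

# One client, one connection: with HTTP/2, all three requests are
# multiplexed over a single TCP + TLS handshake instead of paying
# that setup cost per request.
with httpx.Client(http2=True) as client:
    for path in ("/", "/products", "/cart"):
        r = client.get(f"https://example.com{path}")  # stand-in URL
        print(path, r.status_code, r.http_version)
```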

That's why the network (as in network ops) has to get involved with ops and dev. It's imperative that network architects and operators understand the architecture of the application and its requirements for network and application services so they can better orchestrate the flow of traffic from one end to the other. It isn't enough for network ops to just rapidly provision and configure those services; they need to be optimally provisioned and configured in a way that addresses the performance needs of the application.

This is why DevOps is often discussed in terms of its cultural impact on the organization: it's not enough to automate and orchestrate individual silos of functionality across the application delivery spectrum. The automation and orchestration should support not just the need to work faster, but also the need to work smarter. That requires collaboration with ops and dev to ensure that the stops along the way are minimized and organized in the most optimal, performance-supporting manner possible.

And that’s a cultural change, one that needs to occur if the business is to meet and (one hopes) exceed expectations in the application economy.

About the Author

Lori MacVittie

Principal Technical Evangelist, Office of the CTO at F5 Networks

Lori MacVittie is the principal technical evangelist for cloud computing, cloud and application security, and application delivery and is responsible for education and evangelism across F5's entire product suite. MacVittie has extensive development and technical architecture experience in both high-tech and enterprise organizations. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she authored articles on a variety of topics aimed at IT professionals. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University. She also serves on the Board of Regents for the DevOps Institute and CloudNOW, and has been named one of the top influential women in DevOps.
