AI Applications Put More Focus on the Network

AI is going to impact the network in more ways than one. The expansion of the N-S traffic path and the fight for AI compute interconnection are two of the most important right now.

Lori MacVittie

June 19, 2024

3 Min Read

There's a lot going on with AI right now. You can't read an article or blog without it being mentioned. A lot of the chatter around AI is focused on the user experience and how it will change our lives and jobs from a productivity standpoint. That's all true, but there isn't nearly as much chatter about the impact AI applications are having on the network. And trust me, that impact is real.

Our most recent research indicates organizations are allocating, on average, 18% of their IT budget just for AI. Of that, 18% is going to models, and 9% is being spent on GPUs. Organizations are clearly investing in building out their AI capabilities, with a healthy percentage (60%) buying GPUs to deploy on premises. That implies some rather significant changes to the data center.

That's probably not a surprise, because the entire data center architecture is being impacted by AI, specifically by the way AI compute complexes are being designed and built out (that's what those GPUs are for, after all). There's a lot going on, including a new spat brewing between, well, everyone and NVIDIA. It's akin to the protocol wars that led to TCP/IP becoming the standard for networking in the data center.

And while this spat is focused on the GPU interconnects within an AI "pod" (basically a Kubernetes cluster made up of inferencing servers and a lot of GPUs and CPUs and other PUs), there's also the new interconnect between these AI compute complexes and the rest of the data center.


AI drives data center bifurcation

The data center is bifurcating into two distinct computing complexes: one focused on applications (which may or may not use AI) and an AI compute complex in which inferencing is run at scale.

There’s a lot to unpack here, not the least of which is the introduction of a second N-S insertion point inside the data center for AI traffic. That’s the point at which traffic moves from the existing data center to the AI compute complex. Like any new interconnect, there are a number of network and application layer needs that must be addressed, such as:

  • Dynamic routing integration for resiliency and service load distribution

  • Per-tenant network isolation for ingress and egress for any K8s workload

  • Advanced CGNAT capabilities

  • Per-tenant network firewalling

  • SIEM integration

  • IDS services

  • L4-L7 SLA and throttling

These include all the expected network services required for traffic traversing two 'environments,' as well as the higher-layer (L4-L7) traffic management needed to keep inferencing servers from being overwhelmed by requests.
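To make that last point concrete, here is a minimal sketch of the kind of admission control the L4-L7 layer might apply at the interconnect. It's a plain token-bucket check in Python; the class, the rates, and the numbers are illustrative assumptions on my part, not a description of any particular product's implementation.

```python
import threading
import time


class TokenBucket:
    """Token-bucket admission check: a request is admitted only if a
    token is available; tokens refill at a fixed rate up to a burst cap."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec   # sustained requests per second
        self.capacity = burst      # short-term burst allowance
        self.tokens = float(burst)
        self.last = time.monotonic()
        self.lock = threading.Lock()

    def allow(self) -> bool:
        with self.lock:
            now = time.monotonic()
            # Refill tokens for the time elapsed since the last check.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False  # shed or queue the request instead


# Hypothetical policy: cap a tenant at 50 inferencing requests/second
# with a burst of 100 before backpressure kicks in.
limiter = TokenBucket(rate_per_sec=50, burst=100)
admitted = [limiter.allow() for _ in range(150)]
print(f"admitted {sum(admitted)} of 150 burst requests")  # roughly 100 pass
```

In practice this check would live in the proxy or gateway fronting the AI compute complex and be enforced per tenant, rather than sitting in application code, but the mechanism is the same.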

Notice that most of these aren't your typical "ingress control" functions, despite the ten-thousand-foot view that traffic is simply passing from one Kubernetes cluster to another. The interconnect here requires something more robust than just routing; the focus is clearly on network capabilities common to most data center interconnects.

Per-tenant network isolation may not be a typical enterprise requirement today, but AI is making it more than just a service and cloud provider necessity. Per-tenant networking and architecture constructs will be increasingly important in the AI era, enabling enterprises to prioritize critical AI workloads (like automation and operational analytics) so they aren't starved by lower-priority AI workloads.
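As a sketch of what that prioritization could look like, assuming hypothetical tenant names and a made-up priority map (nothing here comes from any specific product), a per-tenant priority queue in front of the inferencing servers is about the simplest starting point:

```python
import heapq
import itertools

# Hypothetical priority map: lower number = dispatched first.
TENANT_PRIORITY = {
    "ops-analytics": 0,     # operational analytics gets first call
    "automation": 0,        # automation workloads too
    "batch-experiments": 2,
}


class PriorityDispatcher:
    """Queue inference requests by tenant priority so that
    high-priority AI workloads aren't starved by low-priority ones."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # FIFO tie-breaker within a priority

    def submit(self, tenant: str, request: object) -> None:
        prio = TENANT_PRIORITY.get(tenant, 1)  # unknown tenants: middle tier
        heapq.heappush(self._heap, (prio, next(self._seq), tenant, request))

    def next_request(self):
        """Pop the highest-priority (lowest-numbered) pending request."""
        if not self._heap:
            return None
        prio, _, tenant, request = heapq.heappop(self._heap)
        return tenant, request


dispatcher = PriorityDispatcher()
dispatcher.submit("batch-experiments", "retrain embeddings")
dispatcher.submit("ops-analytics", "anomaly check")
print(dispatcher.next_request())  # ('ops-analytics', 'anomaly check')
```

Strict priority like this can starve the low tier entirely; a production interconnect would reach for something closer to weighted fair queuing, but the per-tenant construct is the point.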

I've talked (and talked and talked) about the need to modernize infrastructure in the past. That includes, and indeed focuses on, the network. There are no applications, tools, or technologies today that do not rely on the network. All applications, and especially AI applications, need a flexible foundation on which to deliver their digital value to users. That foundation is the network, and it's one of the reasons we tag infrastructure distributedness and automation as key technical capabilities needed for any organization to become a digital business.

AI is going to impact the network in more ways than one, but the expansion of the N-S traffic path and the fight for AI compute interconnection are two of the most important right now, because together they will determine the direction of next-generation data center architectures.


About the Author

Lori MacVittie

Principal Technical Evangelist, Office of the CTO at F5 Networks

Lori MacVittie is the principal technical evangelist for cloud computing, cloud and application security, and application delivery and is responsible for education and evangelism across F5's entire product suite. MacVittie has extensive development and technical architecture experience in both high-tech and enterprise organizations. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she authored articles on a variety of topics aimed at IT professionals. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University. She also serves on the Board of Regents for the DevOps Institute and CloudNOW, and has been named one of the top influential women in DevOps.

