Why Hardware Still Counts In Networking

Software is revolutionizing network architecture, but hardware remains a critical part of the equation for data center scalability.

Ethan Banks

June 29, 2015

4 Min Read

As a host of the networking podcast Packet Pushers, I receive lots of interesting e-mail. Listeners tell us how we’re doing, share their knowledge, and voice opinions. One opinion that’s come up lately is what I’d describe as an aversion to hardware. In the minds of some, software is king; code is a networking cure-all that will carry us into the future.

Chris Wahl, a fellow writer and engineer, told me he's also heard this anti-hardware sentiment. “Did the bad ASIC hurt you?” he joked, as we tried to understand the software bias.

There is no doubt that much of the revolution in network architecture is coming from software. Great code is bringing to life useful ideas that move networking ahead. However, hardware still plays a critical role in networking. I'll explain why, but first, let's review the pro-software arguments. Here's how I understand them; feel free to counter my thinking in the comments section.

  • x86 is fast enough. General-purpose x86 CPUs are now adequate for networking, fast enough to fill 10 Gbps or more, assuming efficient code.

  • APIs are catalysts. One software component can talk to another via an application programming interface (API). APIs are therefore the catalysts for a bright, software-defined tomorrow: developers can use them to stitch modules together into a richly capable software fabric, delivering networking features never before realized (see the sketch after this list).

  • Love for code is the harbinger of change. We’re seeing an increasing number of open source projects, such as OpenDaylight and ONOS, as well as networking startups basing their value proposition on software. It’s almost unfashionable to get excited about new metal.

  • The SDN paradigm is changing how networking is done. SDN brings new ways of thinking about networking, centered on a controller that arbitrates between smart software and underlying hardware (among other things). A great deal of effort is going into abstraction layers that attempt to make the underlying hardware uninteresting. The ultimate expression of this could be white-box switching, where the silicon is taken for granted and the software programming the white-box infrastructure delivers the unique value.

  • SD-WAN is software’s poster child. As I continue to research the nascent SD-WAN market, I see it as the poster child for networking software. Powerful policy software rethinks traffic forwarding. That policy is distributed to software forwarders running on COTS x86 hardware or virtualized to run on a hypervisor. No custom ASICs required.
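
To make that stitching idea concrete, here's a minimal sketch of what an API-driven workflow can look like. The controller address, endpoint paths, and JSON schema below are hypothetical, invented purely for illustration; they don't belong to any particular product.

```python
# A minimal sketch of API-driven "stitching": query an SDN controller's
# (hypothetical) northbound REST API for topology, then push a forwarding
# policy built from the answer. Endpoints and schema are invented.
import requests

CONTROLLER = "http://controller.example.com:8181"  # hypothetical address

def get_edge_switches():
    """Ask the controller which switches have host-facing ports."""
    resp = requests.get(f"{CONTROLLER}/api/topology/edge-switches")
    resp.raise_for_status()
    return resp.json()["switches"]

def push_policy(switch_id, policy):
    """Hand a policy to the controller; it programs the hardware for us."""
    resp = requests.post(f"{CONTROLLER}/api/policy/{switch_id}", json=policy)
    resp.raise_for_status()

if __name__ == "__main__":
    # Quarantine a suspect host on every edge switch: a network-wide
    # behavior composed from two API calls, with no per-box CLI work.
    for switch in get_edge_switches():
        push_policy(switch["id"], {
            "match": {"src_ip": "10.1.1.99"},
            "action": "drop",
        })
```

The particulars don't matter; what matters is that a network-wide behavior gets composed from a couple of HTTP calls rather than box-by-box configuration.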

I have no arguments with any of these points as they stand. Still, software needs to run on hardware, and x86 presents a scaling limitation. There’s a reason data center switches aren’t based on an x86 architecture: custom ASICs are required to perform packet forwarding at line rate across high-density Ethernet switches, as the back-of-the-envelope math below suggests. This also explains why SD-WAN is doing so well as a pure software-on-commodity-hardware play: SD-WAN neither requires especially high throughput nor operates at high port density.
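
Some rough, back-of-the-envelope math makes the scaling gap plain. The figures here are illustrative round numbers of my own, not vendor benchmarks:

```python
# Back-of-the-envelope math: worst-case packet rate for a modest 2015-era
# data center switch vs. what a software forwarder on x86 manages.
# All figures are illustrative round numbers, not measured results.

PORTS = 32                  # a common 1U switch density
PORT_SPEED_BPS = 40e9       # 40 GbE per port
FRAME_BITS = (64 + 20) * 8  # minimum frame plus preamble and interframe gap

pps_per_port = PORT_SPEED_BPS / FRAME_BITS  # ~59.5 Mpps per 40 GbE port
switch_pps = PORTS * pps_per_port           # ~1.9 billion packets per second

# An optimistic figure for a well-tuned x86 core doing simple forwarding
# with a kernel-bypass framework; real numbers vary widely.
X86_CORE_PPS = 20e6

cores_needed = switch_pps / X86_CORE_PPS
print(f"Switch worst case: {switch_pps / 1e9:.2f} Gpps")
print(f"x86 cores needed at 20 Mpps/core: {cores_needed:.0f}")
```

Call it roughly a hundred well-tuned x86 cores just to keep pace with minimum-size frames on a single 1U switch. That is the wall the custom ASIC removes.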

The industry has not lost sight of hardware’s ongoing critical role in networking:

  • OpenFlow development has slowed, in part, to allow silicon manufacturers and standards writers to achieve parity. As OpenFlow currently stands, different operations result in different levels of performance, all depending on the silicon the operation is run against. Expect the next generations of chips and OpenFlow standards to present far fewer performance compromises than are experienced today.

  • Hardware ASICs are dedicated to a purpose and do not share their resources with other processes. In the context of soft switching, by contrast, a hypervisor vSwitch must share x86 resources with every other process running on the box; more network throughput means less CPU for the rest of the system (see the cycle-budget sketch after this list). Solutions that offload network processing to dedicated hardware, such as Netronome’s, become key for scaling.

  • Service providers are looking to the silicon industry to facilitate NFV at massive scale. I've had briefings recently with both Freescale and ARM executives, who discussed L4-7 acceleration in silicon targeting service provider needs as they retool.

  • Recent M&A activity in the semiconductor industry highlights an active, highly valued market consolidating into silicon behemoths with full product lines -- one-stop shopping for their customers. This includes the mighty Broadcom, a name that has become synonymous with merchant silicon in the data center switching space.
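
To put that resource-sharing problem in numbers, here's a small cycle-budget illustration, again using round numbers of my own rather than measured results:

```python
# Illustrative cycle budget for a soft switch: how many CPU cycles one
# 3 GHz core can spend per packet while keeping up with 10 GbE of
# minimum-size frames. Round numbers, not a benchmark.

CPU_HZ = 3e9                # one 3 GHz x86 core
LINK_BPS = 10e9             # a single 10 GbE port
FRAME_BITS = (64 + 20) * 8  # minimum frame plus preamble and interframe gap

pps = LINK_BPS / FRAME_BITS       # ~14.88 Mpps at 10 GbE line rate
cycles_per_packet = CPU_HZ / pps  # budget to receive, classify, and forward

print(f"{pps / 1e6:.2f} Mpps -> {cycles_per_packet:.0f} cycles per packet")
```

That's about 200 cycles per packet -- a couple of cache misses' worth -- and every one of those cycles comes out of the budget the VMs were supposed to get. Offloading the work to dedicated silicon hands those cycles back to the tenants.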

So, yes, software is changing the face of networking. There is no question about that. But in order to work at the scale that the industry requires, hardware still matters. I do not see this symbiosis changing anytime soon.

About the Author

Ethan Banks

Senior Network Architect

Ethan Banks, CCIE #20655, is a hands-on networking practitioner who has designed, built and maintained networks for higher education, state government, financial institutions, and technology corporations. Ethan is also a host of the Packet Pushers Podcast. The technical program covers practical network design, as well as cutting-edge topics like virtualization, OpenFlow, software-defined networking, and overlay protocols. The podcast has more than one million unique downloads, and today reaches a global audience of more than 10,000 listeners. Also a writer, Ethan covers network engineering and the networking industry for a variety of IT publications and is editor for the independent community of bloggers at PacketPushers.net.
