Wire-Once: Strategy or Pipedream?
July 12, 2010
With the rise of 10Gbps Ethernet and all the talk of converged data and storage networks these days, we are hearing the siren song of wire-once networking. While I love the idea of wire-once, I've been building networks long enough to have heard this song before. Advocates of technologies from 10Base-T to ATM have all claimed you could wire once and then relax and live the simple life. As the saying goes, if it sounds too good to be true, it probably is.
My favorite wire-once failure story dates back to when 10Base-T was cutting-edge technology and I was consulting with a client. We ran an RFP process to choose a replacement for their existing mixture of coax Ethernet, both thick and thin, and IBM Token Ring. After a winner was selected and our consulting engagement ended, some brainiac in facilities decided it would add value to their headquarters building to wire once with multimode fiber.
That decision raised the cost of the project significantly, since fiber-optic hubs had about the same port density as 10Base-T hubs, and all the other parts, from patch panels to NICs, also cost extra. At the time, they thought it was a good investment. By the time they went to upgrade to switched Fast Ethernet, PCs had 100Base-T ports on the motherboard and fiber-optic NICs weren't readily available.
So they bought hundreds of media converters that sat under desks getting full of dust, having their cables kicked out, and so on. They sold the building before upgrading to Gigabit to the desktop, which wouldn't have worked on their old FDDI-style multimode fiber anyway.
While I've already started using 10Gbps as my default for server connections in new designs, I'm not convinced that current wiring systems will last the 10 years Cat 5 has, or even the lifetime of the blade chassis and top-of-rack switches we're buying this year. My first concern is that we haven't really standardized on the cables themselves. Even assuming XENPAK, X2, XFP and the like really are dead, we still have SFP+ twinax cables (in two variations, active and passive) and 10GBase-T, plus our choice of several fiber-optic solutions. Like my former clients, you could decide that OM3, or OM4, fiber will work for today's and tomorrow's technologies, but that means spending $500-$2,000 on optics and cable for each connection, where an SFP+ twinax cable costs $50-$100. Even if it costs you $100 to install a cable from server to top-of-rack switch (in which case you might want to look at what you pay your data center guys) and you have to rewire twice, using copper can save real money. You can use the savings to put more memory in the servers.
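To put rough numbers on that, here's a back-of-the-envelope sketch in Python. The dollar figures are just the rough estimates quoted above, not vendor pricing, and it assumes the fiber plant gets installed once while the copper run is pulled three times (the initial install plus two rewires):

# Back-of-the-envelope per-port cost comparison (illustrative figures only)
fiber_optics_and_cable = 2000   # high end of the $500-$2,000 per-connection estimate
twinax_cable = 100              # high end of the $50-$100 SFP+ twinax estimate
install_labor = 100             # rough cost to pull one cable to the top-of-rack switch

# Fiber: one install. Copper: initial install plus two rewires as connectors change.
fiber_per_port = fiber_optics_and_cable + install_labor
copper_per_port = 3 * (twinax_cable + install_labor)

print(f"Fiber per port:   ${fiber_per_port}")
print(f"Copper per port:  ${copper_per_port} (installed, then rewired twice)")
print(f"Savings per port: ${fiber_per_port - copper_per_port}")

At these assumed prices that's roughly $1,500 per port left over, which can go toward that extra server memory.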
I'm also concerned about bandwidth. 10Gbps seems like a lot now, but some of today's blade server architectures could run low on uplink bandwidth in a couple of years. By 2012, that half-height blade will have 2-4x the compute power and memory of today's Westmere and Magny-Cours systems. Put eight of those servers in a chassis with eight 10Gbps uplinks, and they may be able to overload it, especially in UCS environments where server-to-server traffic goes upstream to the next switch.
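As a sanity check on the uplink math, here's a similar sketch. The per-blade traffic figure is my assumption about what a 2012-era blade might offer, not a measurement:

# Rough chassis oversubscription check (assumed figures, not a vendor spec)
blades = 8
uplinks = 8
uplink_gbps = 10
uplink_capacity = uplinks * uplink_gbps   # 80 Gbps northbound from the chassis

# Assume each future blade can offer roughly 20 Gbps of traffic, and that in a
# UCS-style design even blade-to-blade traffic inside the chassis must cross
# the uplinks to the upstream switch and come back.
per_blade_gbps = 20
offered_load = blades * per_blade_gbps    # 160 Gbps offered

print(f"Uplink capacity:  {uplink_capacity} Gbps")
print(f"Offered load:     {offered_load} Gbps")
print(f"Oversubscription: {offered_load / uplink_capacity:.0f}:1")

At those assumed numbers the chassis is 2:1 oversubscribed, and that's before converged storage traffic starts sharing the same links.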
The data center network of tomorrow, with virtual NICs and switches that recognize when a VM moves from host to host, should need far fewer cable changes than today's does. I'm just not sure we're really talking about wiring once.