Interop: Weaving New Fabrics


Mike Fratto

April 29, 2010


I sat down and said, with a grin, "Infiniband is dead." Asaf Somekh, VP of marketing, and Christy Lynch, director of corporate communications for Voltaire, took the shot well, but my comment framed the conversation in a useful way. Over the last few days at Interop, it has become clear that the differentiator--the long-term differentiator--in Ethernet switching is not going to be speeds and feeds. Network performance will always improve, sometimes in baby steps, sometimes in leaps. The differentiator is services, which aren't cut from the same cloth.

The term "fabric" is thrown around a lot in product marketing. The term sounds cool and is sufficiently vague to cover a multitude of meanings. I have been asking vendors what they think a fabric is. Somekh defined fabric as the hardware--switches and services like QoS that the network provides in delivering frames from node to node. It's the fabric, he says, that Voltaire has 10+ years experience in delivering with Infiniband. Data center Bridging Ethernet has the same set of requirements as Infiniband: lossless, reliable, redundant and fast connectivity in a different Layer 2 protocol.

But those requirements introduce new problems. With Ethernet, when a network port gets congested, the accepted practice is to drop frames and let the upper-layer protocols handle recovery. It works well enough. But in lossless Ethernet, frames can't be dropped, which can actually make congestion worse. Priority Pause was developed so that a congested receiver can tell the adjacent Layer 2 device to stop sending frames until the congestion is relieved, which should provide temporary relief. Of course, pausing transmission increases latency between nodes. Which is worse, dropped frames or delayed transmission? I suppose whichever takes longer to recover from.
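To make that tradeoff concrete, here is a toy sketch in Python of the two behaviors: classic Ethernet drops the frame when its buffer fills, while lossless Ethernet asks the adjacent sender to pause. The queue limit, the pause threshold and the frame model are all invented for the illustration; real switches do this in silicon, not software.

# Toy contrast between classic Ethernet (drop on congestion) and lossless
# Ethernet (pause on congestion). Illustrative sketch only; the queue depth,
# threshold and frame model are hypothetical.

from collections import deque

QUEUE_LIMIT = 8        # frames the egress port can buffer (hypothetical)
PAUSE_THRESHOLD = 6    # high-water mark that triggers a pause (hypothetical)

def classic_ethernet(queue, frame):
    """Congested port: drop the frame and let an upper-layer protocol
    such as TCP notice the loss and retransmit."""
    if len(queue) >= QUEUE_LIMIT:
        return "dropped"          # recovery cost: retransmission delay
    queue.append(frame)
    return "queued"

class Sender:
    def __init__(self):
        self.paused = False
    def pause(self):
        self.paused = True

def lossless_ethernet(queue, frame, sender):
    """Congested port: never drop. Ask the adjacent sender to stop
    transmitting until the buffer drains, trading loss for latency."""
    if len(queue) >= PAUSE_THRESHOLD:
        sender.pause()            # recovery cost: queueing delay upstream
    queue.append(frame)
    return "queued"

if __name__ == "__main__":
    q, s = deque(), Sender()
    for i in range(10):
        lossless_ethernet(q, f"frame-{i}", s)
    print(len(q), s.paused)   # all 10 frames buffered, sender paused: no loss, more latency

The point of the sketch is only that the recovery cost moves: loss becomes latency, and the buffers holding that latency sit in the switch rather than at the end host.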

Priority Pause, however, has intelligence beyond simply telling an adjacent Layer 2 device to stop sending. What we want is to pause low-priority traffic and favor high-priority traffic, and that implies tagging traffic with priority markings. In addition, traffic from many switch ports can be aggregated onto a single output port, and when that port receives a priority pause, the switch has to ferret out where the flow originated and determine how to proceed. Should it start queueing traffic, or should it signal a priority pause to its adjacent neighbor? All of this happens under the covers, but how it happens--the decision-making process--will determine how effectively Priority Pause works.
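Here is a rough Python sketch of that decision as I understand it. The per-priority queues, the headroom threshold and every name in it are my own assumptions for illustration, not the standard's mechanism or any vendor's implementation.

# Sketch of the choice a switch faces when a per-priority pause arrives on an
# output port that aggregates flows from many input ports: absorb it locally,
# or push it one hop upstream for that priority only. All names and thresholds
# are hypothetical.

from collections import defaultdict

class OutputPort:
    def __init__(self, headroom_frames=32):
        # one queue per priority class, with a record of which ingress port
        # each queued frame came from so pauses can be targeted upstream
        self.queues = defaultdict(list)   # priority -> [(ingress_port, frame)]
        self.headroom = headroom_frames

    def on_priority_pause(self, priority, send_upstream_pause):
        """React to a pause for one priority class received from downstream."""
        backlog = self.queues[priority]
        if len(backlog) < self.headroom:
            # Enough local buffer: queue this class and keep the other
            # priorities flowing untouched.
            return "queue locally"
        # Buffer nearly full: find which ingress ports feed this class and
        # propagate the pause upstream, only for that priority.
        for ingress in {src for src, _ in backlog}:
            send_upstream_pause(ingress, priority)
        return "pause propagated upstream"

def demo_pause(ingress, priority):
    print(f"pause priority {priority} toward ingress port {ingress}")

port = OutputPort(headroom_frames=2)
port.queues[3] = [("eth1", "f1"), ("eth2", "f2"), ("eth1", "f3")]
print(port.on_priority_pause(3, demo_pause))

How aggressively a switch absorbs versus propagates is exactly the under-the-covers decision-making I mean; two switches can both "support" Priority Pause and behave very differently under load.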

Voltaire recently announced its Unified Fabric Manager for 10Gb Ethernet, which, in part, provides intelligent path management so that congestion can be handled before it becomes a problem. Equally important, UFM can detect potential problems and, like an expert system, advise IT on how to better connect nodes to maximize throughput and minimize delay. UFM can do this across vendor product lines, including Blade Networks and HP switches. It's an interesting message that I have yet to hear from other switch vendors. I don't claim to know whether UFM works as the vendor claims, or whether it can scale--I don't know enough about the product to make that assessment--but the kind of control and analysis Voltaire is claiming is compelling. Voltaire has a demo at Interop that I will check out.

Brocade is another vendor raising the stakes for automating virtual machine environments with its announcement of Application Resource Broker. Application Resource Broker is a VMware vCenter plug-in that, in concert with Brocade's ADX application delivery devices, automates the provisioning and de-provisioning of virtual machines in a VMware environment.

The idea is simple. When you deploy an application in vCenter, you create the base image of the application and a number of clones that essentially do nothing until needed. These are put into a resource pool. As demand increases, the ADX signals the broker to add new VMs and the required dependencies to meet the new load. Once the new capacity is added, it joins the ADX pool and starts handling requests. As demand drops, virtual servers can be spun down. The decisions to spin up and spin down have adjustable thresholds so that servers don't get spun up and down rapidly based on momentary spikes.
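As a sketch of that threshold logic, here is a minimal Python version: a pool of idle clones, separate up and down thresholds, and a cooldown so a momentary spike doesn't flap servers. The class, thresholds and method names are invented for illustration; this is not Application Resource Broker's actual interface.

# Minimal sketch of threshold-based VM scaling with hysteresis. The pool API,
# thresholds and cooldown are hypothetical.

import time

class VmPool:
    def __init__(self, idle_clones, scale_up_at=0.8, scale_down_at=0.3,
                 cooldown_s=300):
        self.idle = list(idle_clones)   # cloned VMs doing nothing until needed
        self.active = []                # VMs currently taking requests
        self.scale_up_at = scale_up_at
        self.scale_down_at = scale_down_at
        self.cooldown_s = cooldown_s
        self.last_change = 0.0

    def evaluate(self, load):
        """load is utilization of the active pool, 0.0 to 1.0."""
        now = time.monotonic()
        if now - self.last_change < self.cooldown_s:
            return "cooling down"       # ignore momentary spikes
        if load >= self.scale_up_at and self.idle:
            vm = self.idle.pop()
            self.active.append(vm)      # the ADC would now add it to its server pool
            self.last_change = now
            return f"spun up {vm}"
        if load <= self.scale_down_at and len(self.active) > 1:
            vm = self.active.pop()
            self.idle.append(vm)        # drained and powered off
            self.last_change = now
            return f"spun down {vm}"
        return "no change"

pool = VmPool(["web-clone-1", "web-clone-2"], cooldown_s=0)
pool.active = ["web-base"]
print(pool.evaluate(0.9))   # spins up a clone
print(pool.evaluate(0.1))   # spins it back down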

Last year Citrix announced similar integration between Xen, its NetScaler ADC and Workflow Studio. Brocade's broker or Citrix's studio can be used in a very targeted fashion: you can start small, keep a person in the loop, then grow the automation if and when the technology and processes prove themselves reliable.

This is what you'll be dealing with in the future. The network will run itself, and rather than provisioning ports and adding servers, you will be developing port profiles and applying them to VMs, which in turn get applied to and removed from physical and virtual ports as needed. An agile data center can't be managed manually. Is it hard giving up that control? You bet it is, and no one is suggesting that you flip the switch data center-wide today. But you can start piloting these types of technologies if they are available from your preferred vendor and start planning to put them into production. You will be glad you started early.
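To give a feel for what "port profile" means in practice, here is a hedged Python sketch: define the policy once and let it follow the VM onto and off of ports. The fields and method names are hypothetical, not any vendor's API.

# Hypothetical port-profile abstraction: the profile is written once per
# application tier and applied or removed automatically as a VM lands on or
# leaves a port.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PortProfile:
    name: str
    vlan: int
    qos_priority: int          # priority class used for Priority Pause and QoS
    rate_limit_mbps: int

@dataclass
class SwitchPort:
    port_id: str
    profile: Optional[PortProfile] = None

    def apply(self, profile: PortProfile):
        self.profile = profile      # in practice, pushed into switch config

    def clear(self):
        self.profile = None         # port reverts to unconfigured

web_profile = PortProfile("web-tier", vlan=20, qos_priority=3, rate_limit_mbps=1000)

port = SwitchPort("veth-107")
port.apply(web_profile)     # VM starts or migrates onto this host
port.clear()                # VM moves away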
