HP Opens Age Of Converged Networks
June 25, 2010
The week's announcements from HP's annual Technology Forum brought the new world of converged data and storage networks into the mainstream. HP's new blades and ProLiant servers have Emulex silicon providing 10Gbps Ethernet and FCoE as standard equipment on the motherboard and in optional mezzanine cards. Since QLogic supplies the cool new Virtual Connect 10Gb/24 port switch module, both companies get to claim a design win.
The Virtual Connect 10Gb/24 port switch module is cool because it doesn't just aggregate Ethernet traffic from the blades upstream to the core; it's an FCoE switch in its own right. Even cooler, QLogic's Bullet ASIC gives four of the eight upstream ports flex personalities, so each can be either Ethernet or Fibre Channel on command. User organizations can dip their toes in the FCoE waters knowing that if things don't work out, they can add FC modules to the blade enclosure and use all eight ports for Ethernet.
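To make the flex-port idea concrete, here's a toy Python model of a switch module whose uplinks can be flipped between Ethernet and Fibre Channel. This is my own conceptual sketch, not QLogic's actual management interface; only the eight-uplink/four-flex-port split comes from the announcement, and all the names are mine:

```python
from enum import Enum

class Personality(Enum):
    ETHERNET = "ethernet"
    FIBRE_CHANNEL = "fibre_channel"

class UplinkPort:
    """One upstream port on the switch module (conceptual model only)."""
    def __init__(self, number: int, flex: bool):
        self.number = number
        self.flex = flex                      # only flex ports can change personality
        self.personality = Personality.ETHERNET

    def set_personality(self, p: Personality) -> None:
        if p is not Personality.ETHERNET and not self.flex:
            raise ValueError(f"port {self.number} is Ethernet-only")
        self.personality = p

# Eight uplinks: ports 1-4 fixed Ethernet, ports 5-8 flex (per the article).
uplinks = [UplinkPort(n, flex=(n >= 5)) for n in range(1, 9)]

# Dip a toe in FCoE: run two of the flex ports as native Fibre Channel...
for port in uplinks[4:6]:
    port.set_personality(Personality.FIBRE_CHANNEL)

# ...and if FCoE doesn't work out, flip everything back to all-Ethernet.
for port in uplinks:
    if port.flex:
        port.set_personality(Personality.ETHERNET)
```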
All these developments, with the exception of the Fibre Channel/10Gbps Ethernet dual-personality ports, were expected. In no small part, HP is now proving true the comment I made when Cisco's UCS was introduced: that UCS was next year's blade servers, this year. HP's move to put 10Gbps Ethernet and FCoE on the motherboard puts converged networking firmly in the mainstream. It's now up to Dell, IBM and the rest (Supermicro, NEC, Fujitsu) to get on the bandwagon with 10Gbps LOM.
This is a big win for Emulex, displacing Broadcom and Intel, which have ruled the LOM market for years. The current version of this silicon, though, only supports three virtual Ethernet adapters (vNICs) and one virtual storage adapter (vHBA) per 10Gbps channel, so a typical server with a dual-channel LOM gets six vNICs and two vHBAs. While that matches VMware's current best-practice recommendation for virtualization hosts, I can see it becoming a bit constraining, especially as VMDirectPath catches on.
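The adapter arithmetic is simple enough to sanity-check. In this sketch, the per-channel limits (three vNICs, one vHBA) come from the article; the function and names are just mine for illustration:

```python
# Per-channel limits as described above.
VNICS_PER_CHANNEL = 3   # virtual Ethernet adapters per 10Gbps channel
VHBAS_PER_CHANNEL = 1   # virtual storage adapters per 10Gbps channel

def virtual_adapters(channels: int) -> tuple[int, int]:
    """Total (vNICs, vHBAs) for a server with the given number of LOM channels."""
    return channels * VNICS_PER_CHANNEL, channels * VHBAS_PER_CHANNEL

vnics, vhbas = virtual_adapters(channels=2)   # typical dual-channel LOM
print(f"{vnics} vNICs, {vhbas} vHBAs")        # -> 6 vNICs, 2 vHBAs
```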
I can't help but compare this new HP configuration to a UCS chassis, which holds half as many blades, each with just one dual-port CNA. An HP chassis with two 10Gig/FC modules switches traffic between blades locally, where UCS sends all data upstream, and still has more uplinks (see the back-of-the-envelope sketch below). Plus, you can skip the top-of-rack switch entirely and connect eight ports directly to end-of-row Ethernet and FC switches.

The new servers are pretty cool, too. Dear Santa Packard: all I want for Hanukkah is a DL980 with 64 cores and 2TB of memory. I'll figure out how to pay the electric bill.
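For the curious, here's the back-of-the-envelope math behind that uplink claim. The configurations are my assumptions about typical 2010-era setups (a 16-blade HP c-Class enclosure with two 24-port modules at eight uplinks each, versus an 8-blade UCS chassis with two fabric extenders at four uplinks each), not figures from HP or Cisco:

```python
# Back-of-the-envelope comparison; see the assumed configs in the lead-in.

# HP c-Class enclosure: 16 half-height blades, two Virtual Connect
# 10Gb/24 port modules, each with 8 upstream ports.
hp_blades = 16
hp_uplinks = 2 * 8

# Cisco UCS chassis: 8 blades, two fabric extenders with 4 uplinks each;
# all traffic, even blade-to-blade, goes upstream to the fabric interconnect.
ucs_blades = 8
ucs_uplinks = 2 * 4

print(f"HP:  {hp_blades} blades, {hp_uplinks} uplinks, local blade-to-blade switching")
print(f"UCS: {ucs_blades} blades, {ucs_uplinks} uplinks, all traffic sent upstream")
# Under these assumptions, HP hosts twice the blades and still has twice the uplinks.
```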