8 Gb FC, QLogic, HP, And VM I/O


Joe Hernick

March 10, 2008

2 Min Read

I've spent the better part of the last week reconfiguring gear in the virtualization test lab, getting reacquainted with machine specs and idiosyncrasies. We have a variety of servers connected to a Dell, née EqualLogic, iSCSI SAN. Half of our HP servers and one Xserve also have 2 Gb Fibre Channel HBAs, unused since we lost our FC SAN. Remember when those 2 Gb FC connections seemed zippy? 4 Gb HBAs have been on the market for a couple of years, and a number of recent 8 Gb FC solutions are being touted as a remedy for I/O constraints in virtualized environments. So what's new in the FC world? QLogic makes end-to-end 8 Gbps Fibre Channel solutions for the high-end storage market. This morning the company announced a partnership with HP to deliver 8 Gb FC products, including an all-in-one SAN connection kit and 8 Gb FC switches under the HP brand. HP gets a solid 8 Gb FC solution, and QLogic gets a major new distribution partner for its products.

The new HBAs support all flavors of Windows, including W2K8, as well as ESX 3.5 and 3i. Drivers are out for Red Hat, but Xen-based virtualization users will need to wait for optimized versions. The entry-level kit from HP includes four adapters, optics, and cables for $8,199 list; 8 Gb HBAs, switches, and storage solutions aren't exactly on the low end of the market. Two neat points: First, QLogic FC switches are stackable, much like we've come to expect of Ethernet switches. Newer 8 Gb switches connect over 20 Gb FC, while QLogic's legacy 4 Gb switches stack via 10 Gb uplinks. (When did 4 Gb become legacy equipment?) Second, the new 8 Gb adapters are about as green as they can be; the HBAs recognize when they are plugged into a second-generation PCIe slot, clipping off four lanes and dropping power consumption by one watt versus a first-gen PCIe bus while maintaining full throughput. Think of it as cylinder deactivation; every little bit helps, and those one-watt ticks will add up over time in large sites with hundreds of HBAs.
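
As a rough back-of-the-envelope illustration (the fleet size, duty cycle, and electricity price below are my own assumptions, not HP or QLogic figures), here's what a one-watt-per-adapter saving looks like at scale:

```python
# Back-of-the-envelope estimate of the savings from a 1 W drop per HBA.
# All inputs are illustrative assumptions, not vendor numbers.

hba_count = 500          # assumed number of HBAs across a large site
watts_saved_per_hba = 1  # per-adapter saving claimed for gen-2 PCIe slots
hours_per_year = 24 * 365
cost_per_kwh = 0.10      # assumed electricity cost in USD

kwh_saved = hba_count * watts_saved_per_hba * hours_per_year / 1000
print(f"Energy saved per year: {kwh_saved:.0f} kWh")
print(f"Rough annual savings:  ${kwh_saved * cost_per_kwh:.0f}")
# ~4,380 kWh and ~$438 a year for 500 HBAs -- tiny per adapter, but it adds up,
# and that's before counting the cooling load you no longer have to remove.
```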

The move from physical to virtual has raised awareness around storage I/O bottlenecks. As the number of VMs on a host increases, the contention for access to off-server storage can make your 2 Gb HBAs or 1 Gb iSCSI connections seem mighty slow. Vendors are stepping up with faster flavors of SAN connections and hybrid solutions like Fibre Channel over Ethernet in an attempt to address market needs. I don't see any immediate winner solving the virtual I/O dilemma. Shops like Emulex and QLogic are shipping 8 Gb FC, Intel and others are pushing FCoE, and new "fully virtualized" I/O solutions will still be bridged back to FC or iSCSI SANs for the foreseeable future. The appetite for bandwidth never decreases.
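
For a sense of scale, here's a deliberately naive calculation of the best-case share of a single storage link each VM gets as guest counts climb; the link rates and VM counts are illustrative, and protocol overhead, queuing, and bursty workloads are ignored:

```python
# Naive best-case per-VM share of one shared storage link. Speeds in Gbps.
links = {"1 Gb iSCSI": 1, "2 Gb FC": 2, "4 Gb FC": 4, "8 Gb FC": 8}

for vms in (5, 10, 20, 40):
    shares = ", ".join(
        f"{name}: {speed / vms * 1000:.0f} Mbps" for name, speed in links.items()
    )
    print(f"{vms:>2} VMs -> {shares}")
# At 20 VMs per host, an 8 Gb pipe still leaves each guest ~400 Mbps of headroom
# where a 2 Gb HBA offers ~100 Mbps -- before any real-world overhead kicks in.
```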
