Virtual And Converged I/O At Interop
April 29, 2010
Since I'm scheduled to speak about I/O virtualization here at Interop on Wednesday, I spent Tuesday morning tracking down new IOV and converged networking stories and products. In no small part so I could avoid looking like an idiot by describing last month's technology as cutting edge while someone in the audience shouted that a new vendor on the show floor had even better vaporware.
The first cool piece of IOV kit I found was from Aprius. The last time we saw them they were using PCIe extension to let multiple servers share I/O cards in an external chassis. Their new gear junks the PCIe cables and instead encapsulates PCIe data over 10 Gigabit Ethernet. Their first generation of cards is dedicated to connections between servers and the external PCIe chassis, but they're promising to support normal network traffic alongside what they pointedly avoided calling PCIeoE (PCIe over Ethernet) while still keeping the cost of a two-channel card down to around $250.
Using 10GbE has several advantages for Aprius: they can use merchant Ethernet switch silicon to map the 32 Ethernet ports in the I/O chassis to their PCIe bridges. Users can also use standard 10 Gig switches to connect more servers to the chassis, and once network traffic runs across the same channel as the PCIe encapsulation, all eight slots in the IOV chassis can be dedicated to storage and specialized I/O cards like SSL accelerators.
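For readers who want to picture what "PCIe data encapsulated in Ethernet" actually means, here's a toy Python sketch of the idea. The EtherType (an IEEE experimental value) and the shim header are my own placeholders, not Aprius's unpublished wire format, and real products do this in silicon, not software.

```python
# Toy illustration of encapsulating a PCIe transaction in an Ethernet frame.
# The EtherType and shim layout below are invented for illustration only.
import struct

ETHERTYPE_EXPERIMENTAL = 0x88B5   # IEEE local-experimental EtherType, stand-in only

def encapsulate_tlp(dst_mac: bytes, src_mac: bytes, tlp: bytes, chassis_slot: int) -> bytes:
    """Wrap a raw PCIe Transaction Layer Packet in a (hypothetical) Ethernet frame.

    dst_mac/src_mac: 6-byte MACs of the I/O chassis and the server's NIC.
    tlp:             the PCIe TLP bytes to carry.
    chassis_slot:    which PCIe slot in the chassis the TLP is destined for.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", ETHERTYPE_EXPERIMENTAL)
    # Made-up 4-byte shim: slot number plus payload length, so the bridge at
    # the far end can hand the TLP to the right PCIe slot.
    shim = struct.pack("!HH", chassis_slot, len(tlp))
    return eth_header + shim + tlp

frame = encapsulate_tlp(b"\x02\x00\x00\x00\x00\x01",
                        b"\x02\x00\x00\x00\x00\x02",
                        b"\x00" * 16,      # placeholder TLP
                        chassis_slot=3)
print(len(frame), "bytes on the wire")
```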
Now we just need a vendor like Fusion-IO or LSI to support SR-IOV on their PCIe flash cards so we can assign slices of the flash memory to servers in the rack. Then we could assign flash LUNs to servers and share a single flash card. Ultimately we could assign LUNs to virtual machines and have them stay available as the VMs move from host to host.
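To make that concrete, here's a minimal sketch of how an SR-IOV-capable card shows up on a Linux host through sysfs; the shared flash card itself is hypothetical since, as noted, nobody ships one with SR-IOV yet.

```python
# Sketch: list PCI devices that advertise SR-IOV and their virtual function
# counts, via the standard Linux sysfs attributes. Assumes a Linux host;
# an SR-IOV-capable PCIe flash card is the hypothetical device of interest.
from pathlib import Path

def list_sriov_devices() -> None:
    """Print every PCI device that advertises SR-IOV and its VF counts."""
    pci_root = Path("/sys/bus/pci/devices")
    if not pci_root.exists():
        print("No sysfs PCI tree found (not a Linux host?)")
        return
    for dev in pci_root.iterdir():
        total = dev / "sriov_totalvfs"
        if total.exists():
            configured = (dev / "sriov_numvfs").read_text().strip()
            print(f"{dev.name}: {configured} of {total.read_text().strip()} "
                  "virtual functions enabled")

if __name__ == "__main__":
    list_sriov_devices()
```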
The other interesting converged networking story was from Mellanox, whom we usually think of as an InfiniBand and HPC networking vendor. They've been peddling InfiniBand-to-Ethernet and Fibre Channel bridges as a converged networking solution for years, but are now making a move that could really shake up the infant FCoE market.
I've ranted before, and will again, about how we won't see the economic benefits FCoE promises until Ethernet vendors that aren't already part of the Fibre Channel micro-economy of Brocade, QLogic, Cisco and Emulex enter the FCoE market. In a move that may help get us to that promised land, Mellanox has not only added FCoE support to its BridgeX chip but is also peddling a $40,000 developer kit Ethernet vendors can use to add FCoE functionality to their switches. This isn't just support for per-priority pause and the rest of the DCB/CEE/DCE lossless congestion management extensions, but real FCoE including the all-important Fibre Channel Forwarder. If Mellanox can get Extreme, Force10, Juniper, Arista and/or Avaya to include FCoE support in their switches, we may see real competition in the FCoE market, and that would be a good thing.
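For the curious, here's a rough sketch of what "real FCoE" means at the frame level: converged traffic on one 10GbE link is sorted by EtherType, with encapsulated Fibre Channel frames and the FIP traffic the Fibre Channel Forwarder handles carried as distinct protocol types. The frame bytes below are placeholders, not captured traffic.

```python
# Classify Ethernet frames by EtherType: 0x8906 carries encapsulated Fibre
# Channel frames (FCoE data), 0x8914 carries the FCoE Initialization
# Protocol (FIP) used for login/discovery with the Fibre Channel Forwarder.
import struct

ETHERTYPE_FCOE = 0x8906   # encapsulated FC frames
ETHERTYPE_FIP  = 0x8914   # FCoE Initialization Protocol

def classify(frame: bytes) -> str:
    """Return a rough classification of an Ethernet frame by its EtherType."""
    if len(frame) < 14:
        return "runt"
    (ethertype,) = struct.unpack("!H", frame[12:14])
    if ethertype == ETHERTYPE_FCOE:
        return "FCoE data (FC frame inside)"
    if ethertype == ETHERTYPE_FIP:
        return "FIP (login/discovery with the FCF)"
    return f"other (0x{ethertype:04x})"

# Placeholder frame: two MAC addresses followed by the FCoE EtherType.
demo = b"\x0e" * 12 + struct.pack("!H", ETHERTYPE_FCOE) + b"\x00" * 32
print(classify(demo))
```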