Intel's VMDq On ESX = Broad Market Goodness

I missed Intel's 10GbE and IOV news. Read on for why you shouldn't make the same mistake.
Intel has leveraged its I/O acceleration technology in VMs since 2006; the VMworld demonstration of Virtual Machine Device Queues (VMDq) gives its 10-Gb cards better alignment with virtualized switching inside the ESX host. Aligning queues yields lower latency and better real-world performance. Intel also gets bragging rights for the first 10-Gigabit Ethernet (10GbE) iSCSI support on ESX.

I met with Shefali Chini and Steven Schultz from Intel's LAN Access Division to discuss VMDq and all things 10GbE. The Intel team is working with VM vendors to improve performance by dedicating virtual I/O pathways based on VM guest requirements.

The Intel cards work much of this magic in hardware, whereas most competitors rely on software for packet prioritization and queuing. Offloading packet sorting to the NIC yields a throughput boost over host-based VMM queuing, and ESX host CPUs also get a break. While VMware's NetQueue takes full advantage of Intel's VMDq, support on Virtual Iron and Citrix/Xen hosts is still pending. Both can run Intel's 1-Gb and 10-Gb NICs just fine, but they won't get the performance boost until early next year: Intel is working with the Xen community to integrate VMDq functionality, with a 1Q '09 target date. (Yes, it is working with MS on Hyper-V, too...)
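
To make the mechanism concrete, here is a toy Python sketch of MAC-based queue steering. This is my own illustration of the general idea, not Intel's or VMware's code; the class, method names, and frame layout are invented for the example.

from collections import defaultdict, deque

class VmdqNicSketch:
    """Toy model of NIC-side packet sorting into per-VM receive queues.

    Purely illustrative: real VMDq hardware does this classification in
    silicon and hands each queue to the hypervisor's per-VM network path.
    """

    def __init__(self):
        # One receive queue per registered guest MAC, plus a default
        # queue for anything the hardware can't classify.
        self.queues = defaultdict(deque)
        self.default_queue = deque()
        self.mac_to_vm = {}

    def register_vm(self, vm_name, mac):
        # The hypervisor tells the NIC which MAC belongs to which guest.
        self.mac_to_vm[mac] = vm_name

    def receive(self, frame):
        # "Hardware" classification: steer by destination MAC so the host
        # CPU never has to demultiplex frames destined for other guests.
        vm = self.mac_to_vm.get(frame["dst_mac"])
        if vm is None:
            self.default_queue.append(frame)
        else:
            self.queues[vm].append(frame)


nic = VmdqNicSketch()
nic.register_vm("web01", "00:50:56:aa:bb:01")
nic.register_vm("db01", "00:50:56:aa:bb:02")
nic.receive({"dst_mac": "00:50:56:aa:bb:01", "payload": b"GET /"})
nic.receive({"dst_mac": "00:50:56:aa:bb:02", "payload": b"SELECT 1"})
print({vm: len(q) for vm, q in nic.queues.items()})  # {'web01': 1, 'db01': 1}

The point of the sketch: once the NIC has already sorted frames into per-guest queues, the hypervisor's switching layer simply drains each queue into its VM instead of inspecting and demultiplexing every frame in software.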

Performance numbers promise good things: Intel's benchmarks on an 8-way host running eight VMs showed throughput climbing from 4 Gbps to 9.2 Gbps with VMDq enabled, and up to 9.5 Gbps with packet-size tweaks. While I always take vendor benchmarks with a grain of salt, real-world numbers will likely show a dramatic boost on I/O-congested ESX servers.
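
Quick back-of-the-envelope math on Intel's quoted figures (my arithmetic, not a new benchmark): 9.2 Gbps over a 4 Gbps baseline is roughly a 2.3x gain, and 9.5 Gbps is about 95 percent of nominal 10GbE line rate.

# Back-of-the-envelope math using only the figures Intel quoted.
baseline_gbps = 4.0    # ESX host throughput without VMDq, per Intel's demo
vmdq_gbps = 9.2        # with VMDq enabled
tuned_gbps = 9.5       # with packet-size tweaks
line_rate_gbps = 10.0  # nominal 10GbE line rate

print(f"VMDq speedup: {vmdq_gbps / baseline_gbps:.1f}x")             # ~2.3x
print(f"Tuned link utilization: {tuned_gbps / line_rate_gbps:.0%}")  # 95%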

Why all the fuss around IOV at VMworld? The answer is simple: we're all pushing against the bottleneck of 1-Gb NICs on multiguest boxes. Necessity has spawned a variety of workarounds, but most are either Band-Aids or greatly increase management complexity: dedicated bridged cards per VM, parallel storage architectures built on dedicated NICs and/or HBAs, performance management tools, and so on.
