Virtualizing I/O

George Crump

October 1, 2009

Virtualization, whether it be server, storage, I/O or memory, typically becomes interesting to data center professionals when there is an excess of the resource being virtualized. Server virtualization, for example, is practical because in most cases there is plenty of compute resource to go around. The next resource we are going to have in abundance is I/O bandwidth, and virtualizing I/O may be the next big initiative in the data center.

I/O bandwidth is starting to become plentiful. Even today's 4Gb Fibre Channel storage links offer plenty of I/O bandwidth for many data centers, and those that need more have an 8Gb option becoming available quickly, with 16Gb on the horizon. On the IP side, 10Gb Ethernet is becoming a common upgrade path, and converged FCoE will start out at 10Gb. The point is that for many workloads this is more than enough bandwidth, even in virtualized server environments.
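To put rough numbers behind that claim, here is a quick back-of-the-envelope calculation in Python. The 120 MB/s per-server demand is a hypothetical figure I picked for illustration, and encoding and framing overhead is ignored, so treat the results as upper bounds.

```python
# Back-of-the-envelope math on the bandwidth argument. The 120 MB/s
# per-server demand is a made-up illustrative figure, and encoding
# overhead (8b/10b on the FC links of this era) is ignored, so these
# are upper bounds.
links_gbps = {"4Gb FC": 4, "8Gb FC": 8, "16Gb FC": 16, "10GbE": 10}
demand_mbps = 120 * 8  # a hypothetical server pushing 120 MB/s of I/O

for link, gbps in links_gbps.items():
    headroom = gbps * 1000 / demand_mbps
    print(f"{link}: nominal capacity for ~{headroom:.0f} such servers")
```

Even the slowest link in that table could nominally carry several such servers' worth of traffic, which is exactly the excess that makes sharing attractive.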

Companies like Aprius and VirtenSys are delivering I/O virtualization technology that shares this excess bandwidth across multiple physical and/or virtual machines. A typical implementation at this point uses a top-of-rack appliance or switch into which you insert FCoE, 10GbE, 8Gb FC or other PCIe-based cards. A redundant pair of cables then runs to each server in the rack, and that pair gives the server access to all of the I/O cards in the I/O virtualization switch on a shared or dedicated basis. In each server you install a relatively simple card that reminds me of a PCIe extension card. I/O virtualization has the potential to offer the same cost-saving benefits to I/O that server virtualization brought to compute.
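To make the architecture concrete, here is a minimal sketch of the kind of mapping such a switch performs. Every class and method name below is my own invention for illustration; this models the idea, not any vendor's actual API.

```python
# A minimal sketch of the mapping an I/O virtualization switch performs.
# All names here are hypothetical; this models the idea, not a real API.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Adapter:
    slot: int
    kind: str                              # "8Gb FC", "10GbE", "FCoE", ...
    dedicated_to: Optional[str] = None     # server name, if dedicated
    shared_by: List[str] = field(default_factory=list)

class IOVSwitch:
    """Top-of-rack switch holding the physical PCIe cards."""
    def __init__(self, adapters: List[Adapter]):
        self.adapters = adapters

    def attach(self, server: str, kind: str, dedicated: bool = False) -> Adapter:
        """Map a virtual adapter in `server` onto a physical card."""
        for card in self.adapters:
            if card.kind != kind or card.dedicated_to is not None:
                continue
            if dedicated:
                if not card.shared_by:     # only an unused card can be dedicated
                    card.dedicated_to = server
                    return card
            else:
                card.shared_by.append(server)   # many servers, one card
                return card
        raise RuntimeError(f"no available {kind} card in the switch")

switch = IOVSwitch([Adapter(0, "8Gb FC"), Adapter(1, "10GbE")])
switch.attach("server-01", "8Gb FC")                 # shared FC bandwidth
switch.attach("server-02", "8Gb FC")                 # same card, also shared
switch.attach("server-03", "10GbE", dedicated=True)  # one server, one card
```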

A good place to start is with redundant storage adapters (HBAs) and network adapters (NICs). We all put them in our servers in case the primary adapter fails, yet while I have seen adapters fail, it is certainly not an everyday event. With I/O virtualization you could place a single card in every server in a rack, put a single redundant card in the I/O virtualization switch, and share it across all the servers: a global spare for the whole rack. This allows you to test I/O virtualization without jumping in with both feet, saving a minimum of two adapters per server. The cost savings on just testing I/O virtualization would be substantial; imagine the savings when you get confident in the technology and can roll it out at full scale.
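Here is a small sketch of that global-spare arrangement, again with hypothetical names: each server keeps its one mapped card, and the first server to lose an adapter gets remapped onto the rack's single spare.

```python
# A sketch of the "global spare" idea: one spare card in the switch backs
# every server in the rack. Names and structure are hypothetical.
from typing import Dict, Optional

class RackSpare:
    def __init__(self, primary_slots: Dict[str, int], spare_slot: int):
        self.slots = dict(primary_slots)   # server name -> switch slot in use
        self.spare_slot = spare_slot       # one spare for the whole rack
        self.spare_user: Optional[str] = None

    def fail_over(self, server: str) -> int:
        """Remap a server whose adapter failed onto the shared spare."""
        if self.spare_user is not None:
            raise RuntimeError(f"spare already claimed by {self.spare_user}")
        self.spare_user = server
        self.slots[server] = self.spare_slot
        return self.spare_slot

rack = RackSpare({"server-01": 0, "server-02": 1, "server-03": 2}, spare_slot=9)
rack.fail_over("server-02")     # server-02 now runs on the shared spare
```

The trade-off is straightforward: a rack of N servers carries one spare instead of N redundant adapters, in exchange for accepting the small risk of a second failure before the first card is replaced.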

For some environments and some servers the thought of sharing I/O bandwidth is impractical; these systems need all the bandwidth they can get. For the other, maybe most, servers in the environment, we are getting to the same point we reached with the compute resource: we can give them more bandwidth than they will likely ever need, and it makes sense to virtualize and share that bandwidth.
