Dealing With VMware's I/O Challenges
September 1, 2010
One of the key themes at VMworld this week is dealing with the I/O challenges that a physical host loaded up with a dozen or more virtual machines places on storage and the storage infrastructure. This is caused by consolidating hundreds of I/O-friendly standalone systems into a few dozen hosts. While virtualization reduces the number of physical servers, it turns each remaining server into an I/O nightmare.
These I/O challenges can be addressed at several layers of the virtual environment. One layer is the infrastructure itself. The obvious suggestion here is simply to make it faster: companies pitching 10GbE and 8Gb Fibre Channel cards and switches are out in full force at the show. Those cards are also getting smarter, with the ability to sub-divide or prioritize their bandwidth on an as-needed basis for specific virtual machines.
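As a rough illustration of what that kind of per-VM bandwidth partitioning amounts to, the sketch below divides a single link's bandwidth among virtual machines in proportion to assigned shares, with optional per-VM caps. The function and VM names are hypothetical, not any vendor's actual interface.

```python
# Minimal sketch of share-based link partitioning, similar in spirit to what
# per-VM bandwidth prioritization on a smarter NIC/HBA does. Hypothetical API.

def allocate_bandwidth(link_gbps, vm_shares, vm_limits_gbps=None):
    """Split a link's bandwidth across VMs in proportion to their shares,
    honoring optional per-VM caps (unused capped bandwidth is not redistributed
    in this simplified version)."""
    vm_limits_gbps = vm_limits_gbps or {}
    total_shares = sum(vm_shares.values())
    allocation = {}
    for vm, shares in vm_shares.items():
        fair_share = link_gbps * shares / total_shares
        cap = vm_limits_gbps.get(vm, link_gbps)
        allocation[vm] = min(fair_share, cap)
    return allocation

# Example: a 10GbE uplink shared by three VMs with different priorities.
print(allocate_bandwidth(
    link_gbps=10,
    vm_shares={"db-vm": 50, "web-vm": 30, "backup-vm": 20},
    vm_limits_gbps={"backup-vm": 1.5},  # keep backup traffic from flooding the link
))
```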
Also gaining in popularity is I/O virtualization (IOV). As we discussed in our recent article "Using Infrastructure Bursting To Handle Virtual Machine Peaks," IOV extends the ability to shift I/O resources as needed beyond the virtual machines on a single host to virtual machines across physical hosts. While IOV is often looked at as a cost-savings mechanism, since bandwidth is shared across multiple physical hosts, it also adds data center flexibility: bandwidth can be moved between physical servers as needed without having to touch those servers.
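To make the "move bandwidth without touching the servers" idea concrete, here is a hedged sketch of a shared IOV bandwidth pool being reallocated between hosts in software. The class, host names, and numbers are illustrative assumptions, not any product's interface.

```python
# Hypothetical sketch of the IOV idea: a shared pool of I/O bandwidth that can
# be regranted between physical hosts without cabling or card changes.

class IOVPool:
    def __init__(self, total_gbps):
        self.total_gbps = total_gbps
        self.assigned = {}  # host name -> Gb/s currently granted

    def free_gbps(self):
        return self.total_gbps - sum(self.assigned.values())

    def assign(self, host, gbps):
        """Grant (or resize) a host's slice of the shared pool."""
        current = self.assigned.get(host, 0)
        if gbps - current > self.free_gbps():
            raise ValueError(f"pool exhausted; only {self.free_gbps()} Gb/s free")
        self.assigned[host] = gbps

# Shift bandwidth from a quiet host to one hitting an I/O peak, in software.
pool = IOVPool(total_gbps=40)
pool.assign("esx-host-1", 20)
pool.assign("esx-host-2", 20)
pool.assign("esx-host-2", 10)   # host 2 is idle, shrink its slice
pool.assign("esx-host-1", 30)   # hand the reclaimed 10 Gb/s to host 1
```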
The second area that has to be contended with is the storage system itself, and there are two concerns here. First, how fast can the storage media, disk or solid state, respond to the I/O demand? Second, how much of that I/O can the storage controller handle? This is an area where a lot of confusion can be caused by walking the trade show floor: adding solid state storage to an array does not solve all your problems.
There are four questions to ask as you look for faster storage to address your I/O challenges. First, are my physical hosts generating enough I/O to justify a move to solid state or a faster storage mechanism? Thanks to virtualization, it is more likely that they are, but you need to be sure.
Second, can my infrastructure transport that data fast enough to put pressure on the storage? See the above discussion on infrastructure I/O, but this is not limited to having an 8Gb FC or 10GbE environment. If you have enough 4Gb FC or even 1GbE connections, in aggregate they can put pressure on the storage.
Third, can my storage controller/NAS head support the I/O rates that I am transferring? This may be more critical than the underlying storage itself. If the controller that is receiving all of this data can't process it quickly enough, it does not matter how fast the underlying storage is.
The final question, once all of the above are answered "yes," is how much and what type of storage to add to the storage system. Until the data can move to and through the storage system fast enough, worrying about SSD versus 15K SAS or anything else is a waste of time. You can address these components individually, or all at once with a single system that improves network bandwidth, storage processing capability, and storage device speed together.
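The sketch below walks through the four questions with made-up numbers: aggregate host demand, fabric throughput, controller throughput, and device throughput. The per-link figures are rough, commonly cited usable rates and will vary with workload; the function and values are illustrative assumptions, not a sizing tool.

```python
# Hypothetical walk through the four questions. Rough usable throughput per
# link in MB/s; real numbers depend on workload, protocol overhead, and tuning.
LINK_MBPS = {"1GbE": 110, "10GbE": 1100, "4Gb FC": 400, "8Gb FC": 800}

def find_bottleneck(host_demand_mbps, links, controller_mbps, device_mbps):
    """Return the first layer that can't keep up with host demand:
    fabric (question 2), controller (question 3), or storage devices (question 4)."""
    fabric_mbps = sum(LINK_MBPS[kind] * count for kind, count in links.items())
    for layer, capacity in [("fabric", fabric_mbps),
                            ("controller", controller_mbps),
                            ("storage devices", device_mbps)]:
        if capacity < host_demand_mbps:
            return f"{layer} is the bottleneck ({capacity} < {host_demand_mbps} MB/s)"
    return "no bottleneck yet; faster media such as SSD could actually be exercised"

# Question 1: measure the aggregate demand from the virtualized hosts first.
print(find_bottleneck(host_demand_mbps=2500,
                      links={"4Gb FC": 8},     # eight 4Gb FC paths, ~3200 MB/s
                      controller_mbps=2000,     # controller tops out below demand
                      device_mbps=6000))
```

In this example the fabric keeps up but the controller does not, which is exactly the case where buying faster disks or SSD first would be wasted money.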
Performance problems are going to be the new reality in server virtualization. As servers are consolidated, so is the performance demand. Understanding how to deal with these challenges is a critical component in increasing VM density and driving even more cost out of the data center.