VM & iSCSI
In a VMware world, is iSCSI the odd man out?
December 10, 2008
1:30 PM -- We've been doing a lot of research and testing around VMware storage protocols lately, and I am coming to the conclusion that among the big three -- Fibre Channel, iSCSI, and NAS (NFS) -- iSCSI is the odd man out. I'm going to spend the next two or three blogs sharing my thoughts on what we are seeing, and I'm looking forward to some feedback.
First, I know that getting into protocol discussions is about as safe as getting into de-duplication discussions, but let me state my case. If you are making a selection today, your choices are typically going to be 8-Gbit/s Fibre Channel (maybe 4-Gbit/s), 1-Gbit/s iSCSI, and 1-Gbit/s or possibly 10-Gbit/s Ethernet with NFS.
When it comes to building a VMware storage infrastructure, the decision usually comes down to performance, cost, and ease of use. Sure, there are other issues like security and reliability, but most customers focus on those first three. There is also the comfort factor -- you are most likely to stick with what you use now or what your peers use.
When it comes to straight performance, most people will concede that Fibre has the advantage from a raw-numbers standpoint, and if your hosts and their workloads can actually take advantage of that kind of performance, then Fibre is the most likely candidate. For some customers the performance of iSCSI and NFS is acceptable, especially initially.
If your I/O load can be easily sustained by either iSCSI or NFS, then you are looking at ease of use and cost, as the two protocols are virtually tied in storage I/O performance. For many, iSCSI used to be the "go to" technology for ease of use. The basic pitch was that it ran over IP, so it had to be easier. I've been working with iSCSI since 2002, and it has always been pretty straightforward, especially if the customer can get away with using software initiators and the performance of a standard Ethernet card is acceptable to them.

Where iSCSI begins to have challenges is when you need to extend it. For example, in an ESX environment you may want to add an iSCSI HBA to offload IP processing or to boot the ESX server from the SAN. When it comes to performance tuning, you may want to add multiple HBAs, set up VLANs, and take other tuning steps. None of this is impossible, but very quickly you get into the kind of architecture planning you were trying to avoid by staying away from a Fibre Channel infrastructure.
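To give a concrete sense of that planning, here is a rough sketch of what standing up the ESX 3.x software iSCSI initiator looked like from the service console. Treat it as an outline, not a recipe: the adapter name (vmhba32), IP addresses, and port group names are assumptions, and exact flags varied between ESX releases.

    # Build a vSwitch with a dedicated VMkernel port for iSCSI traffic
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -A iSCSI vSwitch1            # add an "iSCSI" port group
    esxcfg-vswitch -L vmnic1 vSwitch1           # uplink a physical NIC
    esxcfg-vmknic -a -i 192.168.10.10 -n 255.255.255.0 iSCSI

    # Enable the software iSCSI initiator and point it at the array
    esxcfg-swiscsi -e
    vmkiscsi-tool -D -a 192.168.10.20 vmhba32   # SendTargets discovery address
    esxcfg-rescan vmhba32

Layer on CHAP, VLAN tagging, a second uplink for multipathing, and a hardware HBA for boot-from-SAN, and the "simple" IP SAN starts to look a lot like the design exercise you were trying to skip.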
In the meantime, the Fibre Channel community has focused on making its technology easier to use. While it will vary based on your background, many people -- myself included -- find that Fibre is now just about as easy to set up as iSCSI, especially once you try to scale iSCSI out. You could also make the case that you will hit the performance wall sooner with iSCSI than you will with Fibre.
With either protocol you're dealing with block-based access, which means either VMFS or RDMs. That's not a big challenge, depending on your background, but it is certainly an area that trips some people up. In the past, block storage was the only choice, so challenge or not, you had to deal with it. NFS changes that by offering file-based access to your VMware storage.
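For readers who haven't touched the block side, the practical choice is between carving virtual disks out of a VMFS datastore and mapping a raw LUN through an RDM pointer file. The vmkfstools commands below are a sketch from the ESX 3.x era; the paths and sizes are made up, and flags may differ slightly by release.

    # Create a 10-GB virtual disk on a VMFS datastore
    vmkfstools -c 10G /vmfs/volumes/datastore1/vm1/vm1.vmdk

    # Or map a raw LUN as a virtual-compatibility RDM
    vmkfstools -r /vmfs/devices/disks/vmhba1:0:3:0 /vmfs/volumes/datastore1/vm1/vm1-rdm.vmdk

With NFS there is no equivalent step: the datastore is just an exported directory, and each VM's .vmdk files sit on the filer as ordinary files.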
In my next post I'll explore using an NFS infrastructure with VMware. NFS brings the cost savings of IP networking together with the simplicity of dealing with a file system rather than block storage. But does it offer the performance you need in these environments?
George Crump is founder of Storage Switzerland, which provides strategic consulting and analysis to storage users, suppliers, and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.