iSCSI & VMware

I've taught guys with no storage background to manage iSCSI SANs in two hours; those with Fibre Channel SANs mostly have VARs do the management

Howard Marks

February 7, 2009

4 Min Read

11:30 AM -- While my esteemed fellow Byte and Switch blogger George Crump sees NFS as the storage protocol of choice for VMware, I have to take a contrary position and champion the iSCSI SAN. iSCSI SANs are cost-effective, generally easy to configure, and supported by all the features of both VMware and Windows Server, which is the most common VMware guest. All that, and it runs on IP to boot.

I know all you grizzled storage geeks out there are going apoplectic at the mere thought of iSCSI. From where you sit, iSCSI is storage's red-headed stepbrother. You see it as a little simple (implying not ease of use but slow on the uptake), slow, and not really suitable for the big boys. Even the fact that adding a server to a Fibre Channel SAN costs three or more times what it costs for iSCSI or NFS reinforces the enterprise elites' position that iSCSI is for SMBs.

Well, the truth is, a couple of 1-Gbit/s Ethernet channels are more than the vast majority of virtual server hosts need when running application, Web, test, development, and assorted other servers. The best published benchmarks from NetApp and VMware at VMworld Europe 2008 show that with typical workloads, iSCSI is less than 10 percent slower than FC, and up to 10 percent faster than NFS, while using just 20 percent more CPU than NFS.

NFS advocates like George argue that NFS data stores are, by definition, thin provisioned. After all, each VMDK and snapshot only takes up space in the file system when the machine, or snapshot, is created. Of course, that is a feature of any file system. Even without thin provisioning support in the storage hardware, which is becoming pretty common, creating a VMFS LUN that supports multiple VMs accomplishes the same thing. NFS-hosted VMDKs are created as sparse files, which can also save space. But if you use Storage vMotion or clone from templates, which is how I create most of my VMs, they expand, eliminating that advantage.
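The sparse-file behavior behind those savings is easy to see outside of VMware. A minimal sketch (Python; the file name is hypothetical, and the result assumes a file system that supports holes, as ext4 and most modern file systems do) showing the gap between a file's apparent size and the blocks it actually consumes:

```python
import os

# Create a sparse file: seek past 1 GiB and write a single byte.
# Only the blocks actually written consume disk space; the rest
# is a "hole" the file system materializes on demand.
path = "sparse_demo.img"
with open(path, "wb") as f:
    f.seek(1024 ** 3)      # 1 GiB apparent size
    f.write(b"\0")

st = os.stat(path)
apparent = st.st_size            # logical size: 1 GiB + 1 byte
allocated = st.st_blocks * 512   # physical blocks actually in use

print(f"apparent:  {apparent} bytes")
print(f"allocated: {allocated} bytes")  # far smaller on a hole-aware FS
os.remove(path)
```

This is the same trick an NFS-hosted VMDK relies on; and as noted above, an operation that rewrites every block, like Storage vMotion, fills in the holes and the savings evaporate.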

In his VM & iSCSI entry, George also makes some of the standard storage-guy arguments against iSCSI. To paraphrase: "FC's gotten easier; I can manage an FC SAN just as easily as I can an iSCSI SAN" and "When iSCSI scales, it gets complicated with HBAs and VLANs and such." While there's some truth in both, they're strictly from the storage guys' point of view.

In the midsized organizations (500 to 5,000 total users, 50 to 500 servers) that are my clients, there are few dedicated storage admins, and CIFS is the dominant file access protocol, frequently to Windows servers. They all have network guys who know Ethernet, IP, and VLANs. So the question isn't how hard it is once they know how, but which technology leverages the knowledge they already have. I've taught network and server guys with no storage background to manage EqualLogic SANs in two hours. Most of those that have FC SANs have their VARs do most of the management.
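To give a feel for how little there is to learn, here's roughly the whole process of attaching a Linux host to an iSCSI target with the standard open-iscsi tools (the portal IP and IQN below are placeholders; ESX's software initiator wraps the same discovery-and-login steps in its GUI):

```shell
# Ask the portal what targets it offers (SendTargets discovery)
iscsiadm -m discovery -t sendtargets -p 192.168.10.20

# Log in to a discovered target; once the session is up, the LUN
# appears as an ordinary SCSI disk (/dev/sdX)
iscsiadm -m node -T iqn.2001-05.com.equallogic:vmstore01 \
    -p 192.168.10.20 --login

# Confirm the session is established
iscsiadm -m session
```

For a network guy who already thinks in IP addresses and VLANs, that's a far shorter road than zoning a Fibre Channel fabric and masking LUNs.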

iSCSI also supports RDM and, therefore, Microsoft clustering. VMware HA is great, but organizations that have been using MSCS for years don't want to throw out the solutions that work for them as they virtualize their servers. And VMware HA doesn't provide the granularity that MSCS can.

Finally, new technologies like VMware SRM are supported on block protocols like iSCSI before NFS. While I’ve spoken to several users that have successfully tested SRM on NFS, it's not currently supported by VMware.

Now that's not to say iSCSI is the perfect storage protocol for VMware. In addition to being limited to 1 Gbit/s, VMware's software initiator doesn't yet support multiple connections per session, and boot from SAN requires QLogic HBAs. It also doesn't support the TCP and iSCSI offload provided by the Broadcom Ethernet chips on the motherboards of most servers, increasing CPU utilization. I'm hopeful ESX 4 will address these issues.

Next time: Getting the most from your VMware iSCSI systems.

— Howard Marks is chief scientist at Networks Are Our Lives Inc., a Hoboken, N.J.-based consultancy where he's been beating storage network systems into submission and writing about it in computer magazines since 1987. He currently writes for InformationWeek, which is published by the same company as Byte and Switch.

About the Author

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at DeepStorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at: http://www.deepstorage.net/NEW/GBoS
