All-Flash Arrays: The VDI Solution?

The huge performance provided by all-flash arrays is a boon for virtual desktop infrastructure, but don't be fooled by unrealistic calculations.

Jim O'Reilly

April 17, 2014


Flash technology is fast changing the profile of datacenter storage. From desktops booting in 10 seconds to server workloads running many times faster than their hard-drive equivalents, SSD has finally brought storage into line with Moore’s Law.

After a slow start, array vendors are delivering SANs with SSD and HDD tiered to optimize cost/performance. We are seeing the important metric for storage moving from cost per terabyte to cost per IOPS -- in other words, recognition that SSD and HDD are not just flavors of drives, but are vastly different in what they can do in a given time.

But, as always, the evolution of a new technology has built up momentum, and hybrid arrays now compete with all-flash units. All-flash arrays boasting more than 1 million IOPS are on the market, and the numbers keep rising. Of course, the Zen of benchmarking has been applied to get such results, but these units are still really fast!

One use for this sort of performance is as a front-end cache or as the top tier of storage in a SAN. Depending on the environment, this can be a big boost for a relatively small outlay, perhaps $50,000 to $100,000.

The stellar use case, though, is virtual desktops. However, this requires a bit of scrutiny. EMC reports that a typical VDI instance uses just 25 to 40 IOPS on average, which would imply that a single million-IOPS all-flash array could power 40,000 VDI instances. That’s enough for a huge enterprise, and the cost would be very small compared with the racks of expensive hard-disk storage needed for the same job: using EMC’s numbers, we’d need roughly 10,000 hard drives for those 40,000 instances.
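
As a quick sanity check, here is that sizing arithmetic as a minimal Python sketch. The per-desktop and per-drive IOPS figures are the ones quoted above, not vendor specifications:

    # Back-of-envelope sizing using the figures quoted above.
    array_iops = 1_000_000                  # benchmark figure for one all-flash array
    vdi_iops_low, vdi_iops_high = 25, 40    # EMC's steady-state range per desktop
    hdd_iops = 150                          # one hard drive running flat out

    instances = array_iops // vdi_iops_low                # 40,000 desktops
    hdds_needed = instances * vdi_iops_high // hdd_iops   # ~10,700 drives, call it 10,000
    print(f"{instances:,} desktops, or roughly {hdds_needed:,} hard drives")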

The problem is that EMC's calculation falls down on two points. The first is the so-called “boot storm” when everyone fires up their computers at 8 a.m. As any PC user knows, a hard drive running flat out at 150 IOPS takes a couple of minutes to complete a boot, so a booting desktop demands on the order of 100 IOPS, several times its steady-state average. To match the standard PC expectation, then, we can boot only about a quarter of the load, or 10,000 VDIs, at once.
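
A sketch of that boot-storm arithmetic, where the 100-IOPS figure is an assumption: during boot, each desktop behaves like a PC hammering its own hard drive, not like a desktop idling at its average load.

    # Boot-storm sizing. The 100-IOPS figure is an assumption: during
    # boot, each desktop behaves like a PC hammering its own hard drive,
    # not like a desktop idling at its 25-40 IOPS average.
    array_iops = 1_000_000
    boot_iops_per_vdi = 100                 # assumed demand per booting desktop

    simultaneous_boots = array_iops // boot_iops_per_vdi   # 10,000 desktops
    print(f"{simultaneous_boots:,} desktops can boot at hard-drive-PC speed")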

The second issue may cause a couple of snickers. Replacing the boot drive in a PC with an SSD has already sped up the boot process: most such PCs boot in around 10 seconds, about 10 times faster than before. THIS is the target the all-flash arrays will really be measured against, and the irony is that it’s a result of the SSD’s own success!

A quicker boot time will be important for VDI, especially since most users are accustomed to the instant-on experience of tablets and mobile phones. These devices are becoming the endpoints on which virtual desktops are displayed, and a long boot-up every time the desktop is accessed isn’t going to fly.

This brings those million IOPS into sharp perspective. A 10-second boot implies each desktop pulls on the order of 1,000 IOPS while it boots, so a million IOPS can boot only roughly 1,000 VDIs at SSD speed. Realistically, at this level of usage, the boot time will probably stretch a little due to “reality versus benchmarks,” but we are in the right ballpark.
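
The same arithmetic against the SSD-era target, assuming a 10-second boot requires roughly ten times the IOPS of the two-minute hard-drive boot:

    # SSD-era target. Assumption: a 10-second boot needs roughly ten
    # times the IOPS of the two-minute hard-drive boot, i.e. ~1,000
    # IOPS per desktop while it boots.
    array_iops = 1_000_000
    ssd_boot_iops_per_vdi = 1_000           # assumed per-desktop demand

    fast_boots = array_iops // ssd_boot_iops_per_vdi   # ~1,000 desktops
    print(f"{fast_boots:,} desktops can boot at SSD speed")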

One question bound to come up is capacity. A single all-flash array containing 32 terabytes provides roughly 30 gigabytes for each of those 1,000 VDIs. With deduplication of the operating system and most apps, that’s a good deal of space for user data. In other words, capacity and dollars per terabyte aren’t the issues here.

For now, the metrics to focus on are cost per IOPS and, by extension, cost per VDI instance. At $100,000, that storage costs just $100 per VDI instance, a fraction of what a traditional HDD farm would cost.
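
Putting the capacity and cost figures together, using the 32 TB and $100,000 numbers above and the 1,000-desktop boot limit:

    # Capacity and cost per desktop for a 32 TB, $100,000 array
    # shared by the ~1,000 desktops that can boot at SSD speed.
    array_tb, array_cost, desktops = 32, 100_000, 1_000

    gb_per_vdi = array_tb * 1_000 // desktops   # ~30 GB each, before dedupe gains
    cost_per_vdi = array_cost // desktops       # $100 each
    print(f"{gb_per_vdi} GB and ${cost_per_vdi} per VDI instance")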

Remember, too, that it’s a good idea to run two all-flash arrays as mirrored LUNs to remove single points of failure, and that there is no restriction on how many can be put into a given SAN. Using the arrays to speed up general computing, as opposed to VDI, will keep the forklifts away for a while, although the urge to replace those slow servers and drive job times down even further may be overwhelming.

About the Author

Jim O'Reilly

President

Jim O'Reilly was Vice President of Engineering at Germane Systems, where he created ruggedized servers and storage for the US submarine fleet. He has also held senior management positions at SGI/Rackable and Verari; was CEO at startups Scalant and CDS; headed operations at PC Brand and Metalithic; and led major divisions of Memorex-Telex and NCR, where his team developed the first SCSI ASIC, now in the Smithsonian. Jim is currently a consultant focused on storage and cloud computing.
