What's Next With Flash?

What we need is some kind of automagic mechanism for identifying the hot data and migrating it to the Flash-enabled Tier 0

Howard Marks

March 26, 2009


8:55 AM -- As returns from the iSCSI camp now trickle in here to Solid-State Memory Election Central, a few candidates are emerging as front runners for the title of Conventional Wisdom of 2009 in the use of solid-state memory in the storage market. Clearly, the biggest winners are STEC, whose ZeusIOPS FC and SAS SSDs are filling drive bays in IBM, EMC, Sun, and HDS arrays; and Fusion-io, which is leading the raw performance pack with design wins from HP and IBM for server flash DAS.

Once flash vendors started packaging their products in disk-sized cans with disk interfaces, it was only a matter of time until someone put them in an array. Today you can get a small number of very high-performance Flash drives (STEC's 45K read IOPS / 16K write IOPS units in EMC, HDS, and other arrays) or a whole tray of lower-performing -- but still 10 times faster than 15K drives, with much lower latency -- Intel X25-E or Samsung drives (Pillar, EqualLogic, etc.). Either way, you create RAID sets and LUNs (or whatever your vendor calls them, in whatever order your vendor does it, fat or thin, it doesn't matter... please excuse the short delay as I fall to the floor foaming at the mouth. I'm much better now...) and then have the application guys move the "hot" data to the new flash LUNs. Since these look like disk drives to your array, you can use all the cool data protection features it has. The problem is you still have to move data to a new LUN.

With array SSDs or PCI-E flash, the challenge is identifying the "hot" data that's accessed all the time. That can be just 2 percent to 5 percent of a database, which makes flash cost-effective even at 20x the price per GB of 15K RPM drives. A smart Oracle DBA or development group can move just the busiest parts of a 5-TB database to the new 250-GB flash LUN you just created for them.
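To make that "find the hot 2 to 5 percent" step concrete, here's a rough sketch of the arithmetic -- not any vendor's tool, and the file names and I/O numbers are invented for illustration -- that ranks database files by I/O temperature and picks the hottest ones that fit on that 250-GB flash LUN:

    # Toy sketch: rank database files by read activity and pick the hottest
    # ones that fit on a 250 GB flash LUN. The stats below are invented; in
    # practice you'd pull them from the database's or OS's I/O statistics.

    FLASH_LUN_GB = 250

    # (file name, size in GB, reads per second) -- hypothetical numbers
    datafile_stats = [
        ("orders_idx.dbf",   80, 4200),
        ("orders_data.dbf", 600,  900),
        ("archive.dbf",    2500,   15),
        ("customers.dbf",   300,  650),
        ("temp01.dbf",      120, 1800),
    ]

    # Sort by I/O "temperature": reads per second per GB of space consumed
    by_temperature = sorted(datafile_stats, key=lambda f: f[2] / f[1], reverse=True)

    chosen, used = [], 0
    for name, size_gb, reads in by_temperature:
        if used + size_gb <= FLASH_LUN_GB:
            chosen.append(name)
            used += size_gb

    print(f"Move to flash ({used} of {FLASH_LUN_GB} GB): {chosen}")

In real life the counters would come from the database's or the array's performance statistics, and you'd weigh reads and writes separately, but the rank-by-temperature idea is the same.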

Your Exchange admin, on the other hand, can't break up an information store -- it's always one big file. And smaller organizations that mostly run canned applications may not be able to isolate the 2 percent to 3 percent of a database that's hot data at all; they may be better served by the whole tray of cheaper SSDs that are only 10 times as fast as 15K drives.

What the rest of us need is some kind of automagic mechanism for identifying the hot data and migrating it to the Flash-enabled Tier 0. One solution for Linux and Unix servers is Symantec's Veritas File System, sold as part of its Storage Foundation product. VxFS supports file systems that extend over multiple volumes with different performance profiles, and an administrator can define policies that automatically migrate files from disk to Flash based on their I/O temperature. Granularity is still at the file level, though, so using VxFS to host VMware instances can shift a busy VM to faster storage, but not just that VM's Windows swap file, since it's part of the same .VMDK as the rest of the VM.

Flash is going to make Compellent's automated storage tiering, or similar technology, a standard feature of the disk arrays of 2015. By automatically identifying the hottest blocks in a logical volume and moving them to Flash, automated tiering optimizes performance and minimizes cost without manual intervention from the system or storage admin. While it was cool to move busy data blocks from 7,200 RPM to 15K RPM drives, there is only a 3 or 4:1 performance difference between those tiers. Moving from 7,200 RPM to Flash is more like a 30:1 boost, which means a big performance payback for converting even a small amount of disk to Flash.
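To show what block-level tiering is doing under the covers, here's a toy model -- not Compellent's actual code, and the tier capacity, counters, and promotion interval are all invented -- that counts touches per block and periodically promotes the hottest blocks to a small flash tier:

    # Toy model of automated sub-LUN tiering: track how often each block is
    # touched, then periodically promote the hottest blocks to a small flash
    # tier. Capacities and intervals here are invented for illustration.
    from collections import Counter

    FLASH_TIER_BLOCKS = 4          # pretend the flash tier holds 4 blocks

    class TieringVolume:
        def __init__(self):
            self.access_counts = Counter()   # block number -> touches this period
            self.flash_blocks = set()        # blocks currently living on flash

        def read(self, block):
            self.access_counts[block] += 1
            return "flash" if block in self.flash_blocks else "disk"

        def rebalance(self):
            # Promote the N most-touched blocks; everything else stays on disk.
            hottest = {b for b, _ in self.access_counts.most_common(FLASH_TIER_BLOCKS)}
            promoted = hottest - self.flash_blocks
            demoted = self.flash_blocks - hottest
            self.flash_blocks = hottest
            self.access_counts.clear()       # start a fresh measurement period
            return promoted, demoted

    vol = TieringVolume()
    for block in [7, 7, 7, 3, 3, 9, 1, 7, 3, 2, 7]:
        vol.read(block)
    print(vol.rebalance())                   # first period: blocks 7, 3, 9, 1 get promoted

A real array works on much larger extents, with decaying counters and scheduled relocation windows, but the core loop is the same: measure, rank, move.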

While we're looking into the crystal ball, when are RAID controller designers going to start using Flash for cache protection? Rather than having a battery the size of Officer Krupke's flashlight to keep the DRAM alive for a week, a controller could use a smaller battery or an ultracapacitor to power a copy of the dirty cache blocks to Flash.
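The firmware logic wouldn't have to be complicated, either. Here's a back-of-the-napkin sketch, in Python standing in for pseudocode, of the dump-on-power-loss / replay-on-boot idea; the cache image layout and checksum scheme are made up for illustration:

    # Sketch of flash-backed write-cache protection: on power loss, the
    # controller runs off a capacitor just long enough to copy dirty DRAM
    # cache blocks into flash with a checksum; on the next boot it verifies
    # the image and replays the writes to disk. Structures are invented.
    import zlib

    def dump_dirty_cache(dirty_blocks, flash):
        """dirty_blocks: {lba: bytes}; flash: a dict standing in for the flash chip."""
        image = dict(dirty_blocks)
        payload = b"".join(lba.to_bytes(8, "big") + data for lba, data in image.items())
        flash["image"] = image
        flash["crc"] = zlib.crc32(payload)

    def replay_on_boot(flash, write_to_disk):
        image = flash.get("image")
        if not image:
            return 0                                   # clean shutdown, nothing to do
        payload = b"".join(lba.to_bytes(8, "big") + data for lba, data in image.items())
        if zlib.crc32(payload) != flash.get("crc"):
            raise RuntimeError("cache image corrupt -- do not replay")
        for lba, data in image.items():
            write_to_disk(lba, data)                   # flush the saved writes
        flash.clear()                                  # image consumed
        return len(image)

    # Example: two dirty 4 KB blocks survive a simulated power loss
    flash_chip = {}
    dump_dirty_cache({100: b"\x00" * 4096, 217: b"\xff" * 4096}, flash_chip)
    replayed = replay_on_boot(flash_chip, lambda lba, data: None)
    print(f"replayed {replayed} cached writes")

The capacitor only has to carry the controller for the few seconds the copy takes, instead of keeping DRAM refreshed for days.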


— Howard Marks is chief scientist at Networks Are Our Lives Inc., a Hoboken, N.J.-based consultancy where he's been beating storage network systems into submission and writing about it in computer magazines since 1987. He currently writes for InformationWeek, which is published by the same company as Byte and Switch.

About the Author

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at Deepstorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at: http://www.deepstorage.net/NEW/GBoS
