Thoughts On SSDs And RAID

Most storage admins, including this intrepid reporter, are pretty reactionary about their RAID level choices. Out of force-of-habit, we choose RAID-1 or 10 when performance is the primary goal and RAID-5 (or RAID-6 for the more enlightened), when we need space more than speed. I've been thinking that these rules-of-thumb may not apply when you have six or 60 SSDs, as opposed to spinning disks, to manage.

Howard Marks

May 17, 2010

3 Min Read

A classic problem is the server admin who insists on a hardware RAID controller and RAID-5 for their server's boot volume. As recently as last year, I saw servers booting from three 73GB drives in RAID-5 while they used the SAN for their data volumes. Bloated as some may accuse Windows of being, it would fit comfortably on a cheaper, faster, simpler, greener and less error-prone mirrored pair of 73GB drives. When I ask the admin why RAID-5 instead of RAID-1, I've never heard a better reason than "RAID-5 must be better than RAID-1. It's a higher level," and the most common answer is "That's what we've always done." I could, and probably will, expound upon that statement for several blog posts.

I must admit, I was slow to accept RAID-5. For years I had good experiences with mirroring in NetWare, which had a pretty clever I/O scheduler for the slow disks of the era (Maxtor XT-1140). Then I had some really bad experiences with RAID-5 on early server RAID controllers and Windows NT on 200MHz Pentium Pros. The basic logic behind using RAID-1 for random workloads is that a mirrored pair can spread read requests across both drives, while each write requires two I/O operations, one to each drive. In an n+1 RAID-5 set, reads, especially sequential reads, are spread across all the drives in the set. Small writes, however, are far more expensive: the classic read-modify-write sequence costs four back-end I/Os per write (read the old data, read the old parity, write the new data, write the new parity), and a controller that recalculates parity from the whole stripe can burn up to 2n. On sequential writes that span a full stripe, the controller no longer needs to read the old data back to calculate parity and performs n+1 I/Os per request, where a mirrored system still does 2n.
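The write-penalty arithmetic above can be sketched in a few lines. This is a rough back-of-envelope model, not any vendor's controller logic: it charges mirrored writes two back-end I/Os, RAID-5 small writes the classic four-I/O read-modify-write, and RAID-5 full-stripe writes n+1 I/Os per n data blocks. The function name and parameters are my own for illustration.

```python
def backend_ios(reads, writes, raid, n=4, full_stripe=False):
    """Estimate back-end I/Os generated by a front-end workload.

    reads, writes -- counts of front-end read and write requests
    raid          -- 'raid1' (mirror) or 'raid5' (n data + 1 parity drive)
    n             -- number of data drives in the RAID-5 set
    full_stripe   -- True for large sequential writes that cover a full stripe
    """
    if raid == 'raid1':
        # Reads are served by either mirror; every write hits both drives.
        return reads + 2 * writes
    if raid == 'raid5':
        if full_stripe:
            # Parity is computed from the new data alone:
            # n data writes + 1 parity write per n blocks written.
            return reads + writes * (n + 1) / n
        # Small-write penalty: read old data, read old parity,
        # write new data, write new parity = 4 back-end I/Os per write.
        return reads + 4 * writes
    raise ValueError(f"unknown RAID level: {raid}")
```

By this model, 100 random small writes cost 200 back-end I/Os on a mirror but 400 on RAID-5, while a full-stripe sequential stream flips the advantage to RAID-5.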

At the most basic level, choosing between mirroring or spreading data and parity across multiple drives is deciding how to balance random I/O performance against capacity. Spinning disks, even 15K RPM drives, are cheap on a $/GB basis, but IOPS are expensive. RAID-1 maximizes the valuable IOPS at the cost of the affordable capacity. SSDs, on the other hand, provide cheap IOPS but expensive capacity.
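To put numbers on that balance, consider a hypothetical set of six identical SSDs (the drive size and per-drive IOPS figures here are invented for illustration, not taken from any datasheet). Assuming two back-end I/Os per mirrored write and four per RAID-5 small write:

```python
def usable_gb_and_write_iops(drives, gb_per_drive, iops_per_drive, raid):
    """Compare usable capacity vs. random-write throughput for a drive set.

    Assumes 2 back-end I/Os per mirrored write and 4 per RAID-5 small
    write (read-modify-write). Purely illustrative arithmetic.
    """
    if raid == 'raid10':
        # Half the raw capacity; each front-end write consumes 2 drive I/Os.
        return drives // 2 * gb_per_drive, drives * iops_per_drive / 2
    if raid == 'raid5':
        # One drive's worth of parity; each small write consumes 4 drive I/Os.
        return (drives - 1) * gb_per_drive, drives * iops_per_drive / 4
    raise ValueError(f"unknown RAID level: {raid}")

# Six made-up 200GB SSDs at 30,000 random-write IOPS each:
mirror = usable_gb_and_write_iops(6, 200, 30000, 'raid10')  # (600, 90000.0)
parity = usable_gb_and_write_iops(6, 200, 30000, 'raid5')   # (1000, 45000.0)
```

With disks, giving up half your IOPS would sting; with SSDs, 45,000 write IOPS may still be more than the workload can use, and the extra 400GB is the scarcer resource.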

Once you've made the decision to invest big bucks in SSDs, it's tempting to go the maximum-performance route and mirror them. That may be the best answer if your array vendor's gone the STEC ZeusIOPS route (as Compellent, EMC, HDS and others have), since you can get away with buying only two or four SSDs. On the other hand, if, like Nimbus, EqualLogic, 3PAR and Pillar, your array vendor uses a larger number of less expensive SSDs, RAID-6, or even RAID-5, may give you plenty of performance and enough capacity to use the new SSD volume to speed up more than just your most critical applications. When you have IOPS to burn, trading IOPS for capacity might be the best bet.


About the Author

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at DeepStorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M. and concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at: http://www.deepstorage.net/NEW/GBoS
