Is RAID Fading Into The Sunset?

With the arrival of faster networks and SSDs, RAID can no longer keep up. Data protection alternatives such as replication and erasure codes are gaining traction.

Jim O'Reilly

April 15, 2014

4 Min Read

Many of us IT pros have been using RAID to protect our data for the entirety of our professional lives. RAID has withstood the pressures of technical evolution well, in part because the fundamentals of disk drives and storage didn't change much over that time.

However, much larger drives, faster networks, and SSD storage have now combined to create a fork in the road, and alternatives are needed. The first crack in the edifice was the realization that when a drive failed in an array of multi-terabyte drives, the rebuild took so long that the risk of a second, terminal failure was too high. This led to the much more complex RAID 6, which creates two parity records for each stripe.
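To put the rebuild window in perspective: recreating a 4TB drive at a sustained 100 MBps takes more than 11 hours, and real rebuilds run slower still, because the array is serving I/O the whole time.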

RAID 6 has a major drawback, however: it takes a lot of compute power to generate the second parity block. An alternative, mirrored RAID 5 (RAID 51), keeps a single parity but replicates the data on a second set of disks, which uses too much space.
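To make the parity mechanics concrete, here is a minimal Python sketch of RAID 5-style P parity over a hypothetical four-block stripe. RAID 6 adds a second syndrome (Q) computed with Galois-field multiplication on top of this, which is where the extra compute cost comes in.

```python
# A minimal sketch of RAID 5-style parity; the stripe below is hypothetical.
blocks = [bytes([i] * 8) for i in (1, 2, 3, 4)]   # one stripe of data blocks

# P parity is a byte-wise XOR across all data blocks in the stripe
p = bytes(b0 ^ b1 ^ b2 ^ b3 for b0, b1, b2, b3 in zip(*blocks))

# A lost block is rebuilt by XORing P with the surviving blocks,
# which means reading every other drive in the array
lost = 2
survivors = [blk for i, blk in enumerate(blocks) if i != lost]
rebuilt = bytes(a ^ b ^ c ^ q for a, b, c, q in zip(*survivors, p))
assert rebuilt == blocks[lost]
```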

The advent of solid-state storage made both of these options untenable. SSDs are orders of magnitude faster than hard drives, and the parity calculations simply couldn't keep up. In addition, the cost of SSDs was so high that a configuration often wouldn't have the half-dozen drives needed to make RAID 5 economical. Many deployments needed just one or two drives to act as caches and Tier 0 storage for critical files.

As a result, SSDs are often mirrored (RAID 1) or replicated. The two approaches are very similar, each keeping a second copy of the data on another drive, but replication goes a step further and stores that copy on a separate storage appliance, removing a single point of failure.
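A rough sketch of the difference, with hypothetical device paths and an assumed appliance address, is below; both approaches write the same bytes twice, but replication crosses the chassis boundary.

```python
import socket

def mirror_write(block: bytes, drive_a: str, drive_b: str) -> None:
    # RAID 1: the same block is written to two drives in the same box
    for path in (drive_a, drive_b):
        with open(path, "ab") as dev:
            dev.write(block)

def replicate_write(block: bytes, host: str, port: int) -> None:
    # Replication: the second copy goes to a separate appliance over
    # the network, so one failed chassis cannot take out both copies
    with socket.create_connection((host, port)) as conn:
        conn.sendall(block)
```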

The other big event in storage affecting RAID is the emergence of cloud services. The need to scale out put enormous pressure on storage approaches, and protecting data by replicating it across commodity hard disk drives (HDDs) made economic sense. Cloud service providers can buy HDDs at the lowest OEM prices, so it's cheaper for them to add drives than to add high-speed RAID heads to protect data. The CSPs also addressed a pressing need for disaster recovery by dispersing data, placing a third replica geographically distant from the other two.

The CSP model makes sense with HDDs costing around $60 for a 2TB drive; the cost of a typical (proprietary) RAID head node pays for a lot of drives. Replication also has the benefit of not slowing down when a drive is lost, since data doesn't need to be recreated from parity, and it maintains integrity even if a second drive fails, since there are three copies.
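Run the numbers: at $60 per 2TB drive, even three full replicas of a 2TB data set cost roughly $180 in raw disk, a fraction of the price of a typical dual-controller RAID array.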

[Read about a new standard that ramps up SSD performance with a radical new approach to storage I/O handling in "NVMe Poised To Revolutionize Solid-State Storage."]

Historically, replication has been tied to an object storage model, somewhat like a file server on steroids. This model uses its own access method, a RESTful HTTP interface, to reach data across the network. Still, block I/O operations to update data are possible, and this need has even spawned universal storage appliances that can manage file, block, and object access to the same object store. An example that's rapidly gaining popularity is Ceph, the open source Linux storage application.
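The flavor of REST-style object access looks roughly like the sketch below; the endpoint, bucket name, and lack of authentication are assumptions for illustration, not any particular product's API.

```python
import requests

BASE = "http://storage.example.com/bucket"   # hypothetical object gateway

# PUT stores a whole object; object stores update by full rewrite,
# which is why block-level updates need an extra translation layer
with open("report.pdf", "rb") as f:
    requests.put(f"{BASE}/report.pdf", data=f)

# GET retrieves the object back over plain HTTP
resp = requests.get(f"{BASE}/report.pdf")
with open("copy.pdf", "wb") as f:
    f.write(resp.content)
```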

Replication's major drawback is the need for three or more full copies of the data. Cleversafe has pioneered an extension of the RAID concept called erasure coding, which adds redundant information, somewhat like parity, to the data and then distributes it over multiple appliances. Typically, 10 data blocks become 16 total blocks (10+6 coding), and any 10 of those 16 blocks are sufficient to reconstruct the data.
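Cleversafe's production codes are proprietary, but the any-10-of-16 property can be illustrated with a Reed-Solomon-style sketch over a small prime field. A real implementation works in GF(2^8) with optimized libraries; this toy version exists only to show the math.

```python
P = 257  # a small prime field; each data symbol is one byte (0..255)

def interp(shares, x):
    """Lagrange-interpolate the unique degree-(k-1) polynomial through
    the k given (xi, yi) shares and evaluate it at point x (mod P)."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(data, n):
    """Systematic (k, n) code: blocks 0..k-1 hold the data itself;
    blocks k..n-1 hold parity points on the same polynomial."""
    pts = list(enumerate(data))
    return pts + [(x, interp(pts, x)) for x in range(len(data), n)]

def decode(any_k_shares):
    """Rebuild the k data symbols from any k surviving blocks."""
    k = len(any_k_shares)
    return [interp(any_k_shares, x) for x in range(k)]

data = list(b"0123456789")        # k = 10 data symbols
blocks = encode(data, 16)         # 10+6 coding: 16 blocks in total
survivors = blocks[6:]            # lose the first six blocks entirely
assert decode(survivors) == data  # any 10 of the 16 recover the data
```

Note how the decode path is all multiplications and modular inverses; that arithmetic, repeated for every missing block, is the compute burden discussed next.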

However, erasure code calculation is compute-intensive, slowing both writes and reads, especially when blocks are missing, and the number of drives involved tends to be high. That makes it useful for scale-out archival data, but problematic for SSDs in Tier 0 or Tier 1, and it will likely remain so unless hardware-assist logic becomes available.

With SSDs straining performance limits and cloud storage using very inexpensive drives to protect data, it looks like replication will take the lead from RAID, if it has not already done so. RAID arrays won’t disappear overnight, but faster object stores, open source enterprise-grade software and cheap drives all mean that the playing field is tilted towards universal storage boxes and the replication approach.

About the Author

Jim O'Reilly

President

Jim O'Reilly was Vice President of Engineering at Germane Systems, where he created ruggedized servers and storage for the US submarine fleet. He has also held senior management positions at SGI/Rackable and Verari; was CEO at startups Scalant and CDS; headed operations at PC Brand and Metalithic; and led major divisions of Memorex-Telex and NCR, where his team developed the first SCSI ASIC, now in the Smithsonian. Jim is currently a consultant focused on storage and cloud computing.
