Amplidata Builds 'Unbreakable' Storage

Howard Marks

April 15, 2011

3 Min Read

While I stand by my position that we put too much of the responsibility for keeping our data safe for the long term on storage systems, as I wrote in Long Term Retention: It's More Than Media, I also believe that you bet on different horses for different courses. Startup Amplidata's new AmpliStor system has most of the features on my wish list for storing large data objects like medical images or rich media.

AmpliStor is a scale-out object store based on the redundant array of inexpensive nodes (RAIN) model. Applications use a RESTful or Python API to connect to controller nodes, which in turn connect to their associated storage nodes over Gigabit Ethernet. The system can theoretically support thousands of nodes.
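To make the access model concrete, here is a minimal sketch of what storing an object through a controller node's RESTful interface might look like. Amplidata hasn't published its endpoint layout here, so the host name, port and namespace path below are hypothetical placeholders, not the documented API.

    # Hypothetical sketch: push a medical image to a controller node over HTTP.
    # The URL structure and port are assumptions, not Amplidata's documented API.
    import requests

    with open("scan-0042.dcm", "rb") as f:
        resp = requests.put(
            "http://controller-1.example.com:8080/namespace/radiology/scan-0042.dcm",
            data=f,
        )
    resp.raise_for_status()
    print("Stored object, HTTP status:", resp.status_code)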

Where most RAIN systems use RAID and/or object replication for data protection, AmpliStor uses a unique set of advanced erasure codes that Amplidata calls BitSpread. Like Reed-Solomon codes, BitSpread provides a much higher level of data integrity than more conventional RAID systems. BitSpread implements erasure codes on a per-object basis: as each object is stored, the system applies the forward error correction math, breaks the data into chunks and distributes those chunks across the drives in the cluster's storage nodes.
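BitSpread's math is proprietary, but the chunk-and-spread idea can be illustrated with a toy example: split an object into k data chunks, compute a redundant chunk from them, and place each chunk on a different storage node. The sketch below uses a single XOR parity chunk for simplicity; Amplidata's codes tolerate far more simultaneous losses than this.

    # Toy illustration of chunk-and-spread (NOT Amplidata's BitSpread math):
    # split an object into k data chunks plus one XOR parity chunk, one per node.
    from functools import reduce

    def chunk_and_spread(data, k=4):
        size = -(-len(data) // k)  # ceiling division
        chunks = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
        parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))
        return chunks + [parity]   # k + 1 chunks to distribute across storage nodes

    def rebuild_missing(chunks):
        # Any single lost chunk is the XOR of the surviving chunks.
        missing = chunks.index(None)
        survivors = [c for c in chunks if c is not None]
        chunks[missing] = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
        return chunks

    parts = chunk_and_spread(b"a large medical image ...", k=4)
    parts[2] = None                # simulate losing one storage node
    parts = rebuild_missing(parts)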

Other erasure code-based systems, like NEC's Hydrastor, let you specify the reliability level, that is, the number of chunks that can be lost while the data remains accessible. The AmpliStor system also lets you specify the number of chunks each object is stored in and, therefore, how broadly the system will spread your data.

Unlike parity-based systems, every chunk includes both data and ECC information, so an AmpliStor controller node can assemble a data object as soon as it has retrieved the minimum number of chunks needed to reconstruct the data. If you select, say, 16 chunks and a reliability level of 4, the AmpliStor system will assemble objects once it has retrieved 12 chunks. For latency-tolerant applications, you could even specify 33 chunks with a reliability level of 13 and put 11 storage nodes in each of three data centers. All your data would remain protected, even in the event of a data center failure, with just over one-third overhead, where a more typical object replication system would need three times as much storage as data to cover similar failures.
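A quick back-of-the-envelope check of those two policies, with "reliability" meaning the number of chunks that can be lost while the object stays readable:

    # Reassembly threshold and redundancy share for the two example policies.
    def policy(chunks, reliability):
        needed = chunks - reliability            # chunks required to rebuild the object
        redundant_share = reliability / chunks   # fraction of raw capacity spent on redundancy
        return needed, redundant_share

    print(policy(16, 4))    # (12, 0.25):  any 12 of 16 chunks rebuild the object
    print(policy(33, 13))   # (20, ~0.39): survives losing an entire 11-node data center,
                            #              versus the 3x raw capacity of three full replicas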

While I love the whole idea of scale-out, most RAIN systems are power-hungry: the typical storage node is more or less a standard 2U dual-Xeon server with eight SATA drives, and all that compute power means it draws somewhere between 205 and 400 watts.

Amplidata has designed its system so the compute power can be concentrated in a small number of controller nodes. The far more numerous storage nodes are based on low-power Atom processors. Each 10-drive storage node draws under 140 watts when active, for a total power consumption of under 7 watts per terabyte. Amplidata claims that, even with the low-power processor, storage nodes are I/O-bound, not CPU-bound. Next, Amplidata needs to implement storage node shutdown so inactive systems draw just the 1 to 2 watts needed to keep the IPMI controller running to receive a power-up command.
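The watts-per-terabyte figure is easy to sanity-check. The drive capacity below is my assumption (2TB SATA drives were typical at the time), not a number from Amplidata:

    # Rough check of the "under 7 watts per terabyte" claim.
    drives_per_node = 10
    tb_per_drive = 2                 # assumed 2TB SATA drives, circa 2011
    node_watts = 140                 # "under 140 watts when active"
    print(node_watts / (drives_per_node * tb_per_drive))   # 7.0 W/TB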

List price for controller nodes is $8,750, and storage nodes sell for $15,000. A three-controller, 10-storage-node starter system would provide 160 TBytes of highly available storage for about $1.10 a gigabyte. (Note: I had the controller and storage node prices reversed originally.)
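The price-per-gigabyte math for that starter configuration works out as follows, using the list prices above:

    # Starter system: 3 controller nodes + 10 storage nodes, 160TB usable.
    total_cost = 3 * 8750 + 10 * 15000   # $176,250
    print(total_cost / (160 * 1000))     # ~$1.10 per gigabyte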

About the Author(s)

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at DeepStorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at: http://www.deepstorage.net/NEW/GBoS
