Scale-Out Storage: How Does It Work?

Howard Marks explains modern scale-out storage systems and how they handle networking.

Howard Marks

June 28, 2016

4 Min Read

Until a few years ago, the storage market was dominated by two system architectures. Monolithic arrays served the high end of the performance and reliability spectrum while dual-controller “modular” arrays fed the larger mid-market. The few scale-out solutions were relegated to specialty uses like high-performance computing, archiving or the large file world of media and entertainment.

The rise of software-defined storage has led to a proliferation of scale-out solutions and scale-out architectures. Today, users can buy scale-out systems for just about any storage use case, from all-flash systems like XtremIO and SolidFire to massively scalable object stores. Of course, there's also integrated scale-out storage and compute from a myriad of hyperconverged suppliers. Scale-out's even becoming the way to go in the backup world, with integrated backup/storage appliances from Cohesity and Rubrik.

This wide variety of scale-out products has relegated the old-school high-end array to the corner -- and admittedly profitable -- use cases that require extreme levels of reliability and/or connectivity. The relative simplicity of a dual-controller array will still have a place in the storage market, both as a solution for applications with more modest scale requirements -- because distributed systems are hard, kids -- and as a larger, more resilient building block with which to scale out.

Shared-nothing cluster

When most storage folks hear the term scale-out, their first thought is of a shared-nothing scale-out cluster. Within a shared-nothing cluster, each node -- almost always an x86 server -- has exclusive access to some set of persistent storage. Nodes in shared-nothing clusters don't need fancy storage features like shared SAS backplanes, so any server, even a virtual one, can be a node in a shared-nothing cluster. Early scale-out players called this shared-nothing architecture a redundant array of independent nodes (RAIN), but that term has fallen by the wayside.

The problem with shared-nothing clusters is that those commodity x86 servers are inherently unreliable devices. Sure, they have dual power supplies, but an x86 server still represents a swarm of single points of failure. To provide resiliency, shared-nothing clusters must either replicate or erasure code data across multiple nodes in the cluster.

The result is that a shared-nothing cluster is generally media-inefficient. Replication requires twice as much media as data to provide any resilience at all, and three times as much to keep operating after both a controller failure and a device failure. Add in that a shared-nothing cluster should always reserve enough space to rebuild its data resiliency scheme after a node failure, and an N-node shared-nothing cluster with three-way replication will only deliver (N-1)/3 times the capacity of each node.
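
To make that arithmetic concrete, here's a back-of-the-envelope sketch in Python; the node count and per-node capacity are made-up figures for illustration, not a sizing recommendation.

```python
# A back-of-the-envelope sketch (not any vendor's sizing tool) of the usable
# capacity of a shared-nothing cluster that keeps three copies of every block
# and reserves one node's worth of space to re-protect data after a node fails.

def usable_capacity_replicated(nodes, capacity_per_node_tb, copies=3):
    raw_tb = nodes * capacity_per_node_tb
    rebuild_reserve_tb = capacity_per_node_tb       # headroom to rebuild one failed node
    return (raw_tb - rebuild_reserve_tb) / copies   # (N - 1) nodes' worth, divided by copies

# Example: eight 10 TB nodes with three-way replication
print(usable_capacity_replicated(8, 10.0))   # ~23.3 TB usable out of 80 TB raw
```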

Erasure coding can be much more efficient, bringing the usable capacity of a cluster up to (N-1)/1.2 times the capacity of each node. But spreading data across a large number of nodes requires a larger cluster; many solutions need six or more nodes to implement, and rebuild, the double-parity scheme required to survive additional device failures during the long rebuild of today's large disk drives. Erasure coding also has write-latency implications, since the slowest node participating in each write defines the application's write latency.
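
The same kind of rough sketch works for erasure coding; the 10+2 layout and the per-node latencies below are assumptions chosen to illustrate the point, not measurements of any particular product.

```python
# Back-of-the-envelope math for an erasure-coded cluster. A 10+2 layout
# (10 data plus 2 parity fragments, chosen here only for illustration)
# stores 1.2 bytes of media for every byte of data, which is where the
# (N-1)/1.2 figure above comes from. The latency helper shows why one
# slow node sets the pace for every write.

def usable_capacity_erasure(nodes, capacity_per_node_tb, data_frags=10, parity_frags=2):
    overhead = (data_frags + parity_frags) / data_frags   # 1.2 for a 10+2 layout
    raw_tb = nodes * capacity_per_node_tb
    return (raw_tb - capacity_per_node_tb) / overhead     # keep one node of rebuild headroom

def write_latency_ms(per_node_ack_times_ms):
    # A write isn't complete until every node holding a fragment has persisted it.
    return max(per_node_ack_times_ms)

print(usable_capacity_erasure(12, 10.0))        # ~91.7 TB usable out of 120 TB raw
print(write_latency_ms([0.4, 0.5, 0.6, 2.1]))   # 2.1 -- the slowest node wins
```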

Scale-out storage and the network

By spreading the functions of a storage array across many independent -- or more accurately, interdependent -- nodes, scale-out storage systems are inherently network-dependent. A scale-out system not only has to present a SAN- or NAS-like interface to the compute workload, but also has to use the network to tie all those nodes together.

The designer of a scale-out storage system faces two primary network problems. The first is moving data between nodes: for data protection, to rebuild after a failure, and to rebalance the system as nodes are added to the cluster. The second is how to handle a request to node A for data that's stored on node D.
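
Here's a deliberately simplified sketch of that second problem; the hash-based block placement and node names are purely illustrative, since every product handles placement and forwarding its own way.

```python
# A toy illustration of serving a read that lands on the "wrong" node. Real
# products use their own placement schemes (hash rings, metadata services and
# so on); this sketch just hashes a block ID onto a node list and proxies the
# read over the back-end network when the data lives somewhere else.
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]

def owner_of(block_id):
    digest = int(hashlib.sha1(block_id.encode()).hexdigest(), 16)
    return NODES[digest % len(NODES)]

def handle_read(local_node, block_id):
    owner = owner_of(block_id)
    if owner == local_node:
        return f"{local_node}: served {block_id} from local media"
    # This hop rides the back-end network; the host never sees it.
    return f"{local_node}: fetched {block_id} from {owner}, replied to the host"

print(handle_read("node-a", "vol7/block42"))
```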

Most early scale-out systems used a dedicated back-end network to interconnect the nodes. Using a dedicated back-end network, and providing any switches that network required, freed storage suppliers from the burden of qualifying and supporting whatever network gear their customers used. More significantly, it let them use a low-latency interconnect like InfiniBand on the back end while presenting standard Ethernet and IP storage protocols to the hosts.

EMC XtremIO even manages to use its InfiniBand back end to provide Fibre Channel on a scale-out system. While IP systems can redirect a request to the node that holds the data a host has asked for, Fibre Channel-attached hosts have to get their response from the same port they sent the request to. An XtremIO node can therefore fetch the requested data from another node in the cluster and still reply in a reasonable time because of InfiniBand's low latency.

While dedicated back-end networks made a lot of sense in the days of 1 Gbps or slower Ethernet on the front end, today’s 10 Gbps networks provide plenty of low-latency bandwidth for both host access and node-to-node traffic. 

About the Author

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at DeepStorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at: http://www.deepstorage.net/NEW/GBoS
