The storage market is changing fast. Here's some guidance for navigating it.
Enterprise data storage used to be an easy field: keeping up meant simply buying more drives from your RAID vendor. With all the new hardware and software available today, that strategy no longer works. In fact, the radical changes in storage products affect not only storage purchases, but ripple through to server choices and networking design.
This is actually a good news scenario. In data storage, we spent much of three decades with gradual drive capacity increases as the only real excitement. The result was a stagnation of choice, which made storage predictable and boring.
Today, the cloud and solid-state storage have revolutionized thinking and are driving much of the change happening today in the industry. The cloud brings low-cost storage-on-demand and simplified administration, while SSDs make server farms much faster and drastically reduce the number of servers required for a given job.
Storage software is changing rapidly, too. Ceph is the prime mover in open-source storage code, delivering a powerful object store with universal storage capability that provides all three mainstream storage modes (block-IO, NAS and object) in a single storage pool. Separately, there are storage management solutions for creating a single storage address space from NVDIMMs to the cloud, compression packages that typically shrink raw capacity needs by 5X, virtualization packages that turn server storage into a shared clustered pool, and tools to solve the “hybrid cloud dilemma” of where to place data for efficient and agile operations.
A single theme runs through all of this: Storage is getting cheaper and it’s time to reset our expectations. The traditional model of a one-stop shop at your neighborhood RAID vendor is giving way to a more savvy COTS buying model, where interchangeability of component elements is so good that integration risk is negligible. We are still not all the way home on the software side, but hardware is now like Legos, with the parts always fitting together. The rapid uptake of all-flash arrays has demonstrated just how easily COTS-based solutions come together.
The future of storage is “more, better, cheaper!” SSDs will reach capacities of 100 TB in late 2018, blowing away any hard-drive alternatives. Primary storage is transitioning to all-solid-state as we speak and “enterprise” hard drives are becoming obsolete. The tremendous performance of SSDs has also allowed the compact storage appliance to displace the RAID array. We aren’t stopping here, though. NVDIMM is bridging the gap between storage and main memory, while NVMe-over-Fabrics solutions ensure that hyperconverged infrastructure will be a dominant approach in future data centers.
With all these changes, what storage technologies should you consider buying to meet your company's needs? Here are some shopping tips.
SSDs have clearly won the battle for primary storage. This is especially true if some of the sacred cows of the enterprise drive are sent packing. Dual-ported drives no longer make much sense when there is appliance-level data integrity, while top-performance NVMe SSDs are required only for the most demanding jobs.
In fact, with 40K to 80K IOPS, compared with a hard drive’s paltry 150 to 300 IOPS, any SSD is much faster, so right-sizing to the application can save a lot of money.
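As a back-of-the-envelope sketch, here is what those per-drive figures mean for sizing. The IOPS numbers are the representative ones cited above; the 120K IOPS workload target is a hypothetical example, not from the article.

```python
# Rough drive count needed to satisfy a random-I/O workload.
# Per-drive IOPS figures are illustrative mid-points of the ranges cited.
import math

DRIVE_IOPS = {
    "nearline HDD": 200,     # mid-range of the 150-300 IOPS cited
    "SATA SSD": 60_000,      # mid-range of the 40K-80K cited
    "NVMe SSD": 1_000_000,   # the "million-plus IOPS" class
}

def drives_needed(target_iops: int, drive_type: str) -> int:
    """Minimum number of drives whose combined IOPS meet the target."""
    return math.ceil(target_iops / DRIVE_IOPS[drive_type])

for kind in DRIVE_IOPS:
    print(f"{kind}: {drives_needed(120_000, kind)} drives for a 120K-IOPS job")
```

The spread is stark: the same hypothetical workload needs hundreds of hard drives but only a couple of commodity SSDs, which is exactly why right-sizing, rather than defaulting to the fastest NVMe drive, is where the savings lie.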
Those million-plus IOPS NVMe drives fit big data apps and big databases really well, but a cheap SSD will likely do the job for a web server or even a virtualization engine. What about wear life? Most SSDs are quite durable: vendors often quote one to five whole-drive writes per day for five years as the wear life. That’s more than enough for apps that lean toward reads over writes, and selecting the right wear-life profile can still save a lot of money.
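The wear-life arithmetic is simple enough to check for your own workloads. The sketch below uses the drive-writes-per-day (DWPD) rating quoted above; the 4 TB capacity and 0.5 TB/day workload are illustrative assumptions.

```python
# Endurance implied by a drive-writes-per-day (DWPD) rating, and whether
# it covers a given workload. All specific figures here are illustrative.
def rated_tbw(capacity_tb: float, dwpd: float, years: float = 5.0) -> float:
    """Total terabytes written allowed over the warranty period."""
    return capacity_tb * dwpd * 365 * years

def outlives_workload(capacity_tb: float, dwpd: float,
                      daily_writes_tb: float, years: float = 5.0) -> bool:
    """True if the rated endurance exceeds the workload's total writes."""
    return rated_tbw(capacity_tb, dwpd, years) >= daily_writes_tb * 365 * years

# A read-mostly web server writing 0.5 TB/day fits a cheap 1-DWPD 4 TB SSD
# with room to spare (7,300 TB rated vs. about 913 TB actually written):
print(outlives_workload(4.0, 1.0, 0.5))
```

Note that for a read-heavy app the rated endurance dwarfs the actual write load, which is the whole argument for buying the low-DWPD drive.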
Last year, I predicted that we’d see parity in SSD and HDD prices in 2017 and, even with the current 10% spike in NAND die prices, vendors are already at parity for nearline and fast SAS/SATA SSDs. Even the premium for NVMe is beginning to disappear, and the new miniature PCIe form factors are helping to drive this.
3D NAND will further drive prices down, bringing parity in bulk storage within reach. Once SSDs reach 30 TB or so later in 2017, the market for smaller (10 TB) nearline hard drives will collapse and SSDs will rule. The impact on both free-standing servers and hyper-converged appliances will be a reduction in cluster size of as much as 50% as each unit can do much more with the faster storage.
Whether buying drives direct from OEMs or through distribution, purchase only as much as you need given the expectation of declining prices. Look to compression and deduplication as a way to reduce future drive purchases. Most importantly, look at using low-cost SSDs with a decent whole drive daily write specification wherever possible. Typical vendors for these are Samsung, WD’s SanDisk and Micron, together with Intel, Toshiba, and Seagate.
NVDIMMs are a relatively new server technology that’s getting a good deal of attention. The technology enjoys broad support, so stable, marketable products that serve as very fast drives already exist. Early models are around 4X faster than SSDs, but expect a real surge in 2018 as byte-addressable versions based on PCM or ReRAM technology hit the shelves. The key to the surge is software, and the changes are extensive, from compilers and operating systems to applications. In-memory databases such as Oracle are likely to be the first concrete users of byte-addressable mode. For other apps, time will tell where the barriers to entry lie.
There are two models being pushed for storage today: the traditional “SAN” type of structure with shared storage in dedicated appliances, and the virtualization model that converges storage and server functions into the same box. The reality is that these are two facets of the same thing. Storage arrays have given way to smaller appliances that look just like a server and this is in fact what drove hyperconvergence. The real difference today is software.
There are several sources for hyperconverged infrastructure, ranging from leader Nutanix to several startups. Most of Nutanix's sales today come from pre-integrated bundles sold by major server or storage vendors, offering quick deployment and one-stop support, but Nutanix is opening up to Chinese and Taiwanese ODMs and their products will cost much less, so expect a price war in a year or so. Put different code on any of these hyperconverged boxes and they are Ceph object stores or Gluster nodes.
Choosing an interface for networked storage is a question of where you are in transition to an all-Ethernet environment. If you are heavily invested in Fibre Channel, you might defer the move to Ethernet for yet another generation, but at some point Ethernet+RDMA and NVMe will be compelling. Ethernet is the better solution for green-field installations.
Secondary storage remains a work in progress in all of this. Typically served by large boxes with 60+ cheap hard drives, this area of storage is likely to go the way of the appliance, which usually runs to 12 drive bays. With four NVMe drives providing 40 TB of capacity and millions of IOPS, the other eight bays are free for secondary bulk storage. Couple in compression/deduplication and 100 TB SSDs and we are looking at as much as 4 petabytes of bulk space in that same 2U appliance. That’s likely overkill for most users!
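The 4-petabyte figure falls straight out of the numbers above. This sketch just makes the arithmetic explicit; the bay split, drive sizes and 5X data-reduction ratio are the article's own estimates for a hypothetical future appliance.

```python
# Effective capacity of the hypothetical 2U, 12-bay appliance described above:
# 4 NVMe bays for primary storage, 8 bays of future 100 TB SSDs for bulk,
# with roughly 5X compression/deduplication applied to the bulk tier.
PRIMARY_BAYS, BULK_BAYS = 4, 8
PRIMARY_TB_PER_DRIVE = 10    # 4 x 10 TB NVMe = 40 TB of fast primary storage
BULK_TB_PER_DRIVE = 100      # the 100 TB SSDs projected for late 2018
DATA_REDUCTION = 5           # typical compression/dedupe ratio cited earlier

primary_tb = PRIMARY_BAYS * PRIMARY_TB_PER_DRIVE
bulk_effective_pb = BULK_BAYS * BULK_TB_PER_DRIVE * DATA_REDUCTION / 1000

print(f"{primary_tb} TB primary, {bulk_effective_pb} PB effective bulk")
```

Eight bays of raw 100 TB drives is 800 TB; the 5X reduction is what turns a single 2U box into a 4 PB bulk store.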
All-flash arrays are a quick way to extend the life of a SAN. They are plug-and-play, relatively inexpensive and very easy to install. Given the high price of array drives, the AFA is a no-brainer for most shops. The performance boost from AFAs has allowed enterprises to postpone decisions on their eventual data center game plans for a year or two until the industry roadmap stabilizes.
AFAs all contain good compression capability, which can be applied to what is now secondary RAID storage. This means that not only has performance expanded by perhaps 1000X, effective capacity has increased by around 5X. That’s one heck of an ROI!
AFAs will feel pressure from clustered small storage appliances using SSDs, whether hyperconverged or not. Today, one AFA can run rings around any appliance thanks to its internal data pathways and richer functionality. In a year or two, that edge will be eroded by faster servers and drives on the one hand, and much cheaper appliances on the other. Time will tell who wins, but AFAs still have Fibre Channel and SAN connectivity to fall back on.
White-box vendors vs. traditional suppliers
Vendors such as Quanta and SuperMicro are selling vast quantities of storage appliances and servers in the cloud market, offering excellent quality at low prices. Traditional vendors are countering by bundling hardware with software and pre-integration, giving us converged and hyperconverged solutions.
Doing your own integration is much easier today because of the COTS model, while low-cost vendors and third-party shops are selling pre-integrated solutions. This is a price war and, in the end, buyers will benefit.
One challenge of going the white-box route is buying your own drives. Distribution does a pretty good job of sourcing at low prices, especially for single-port commercial SSDs. NVMe used to be “OEM-only,” but that barrier to free trade is falling and NVMe drives are now available, though perhaps not the latest models. Expect the drive market to become much more open going forward.
Choosing object storage
There are two models for object storage: Open-source code like Ceph and licensable software. Both models are available via a variety of channels, ranging from self-integration onto a COTS platform, to buying a fully-integrated and supported brand-name box.
Ceph lags in features behind Scality and Caringo, the market leaders in object software: it has emphasized universal storage (block-IO, NAS and object access modes) over features such as compression and deduplication, so a given Ceph installation might cost more for the same effective capacity if you need them. The alternative is to add a third-party compression package as part of an auto-tiering solution, in which case Ceph is probably the cheapest option.
Other, fully bundled object solutions are also available. Among them, DDN’s WOS is noteworthy for its outstanding throughput and random-access performance, though at a price.
Hybrid cloud storage
One major barrier to implementing a hybrid cloud is the data storage model it requires. The primary aim, agile cloud-bursting, is proving difficult because of the latency of storage access between the private and public sections of the cloud. Cloud-bursting is only possible if the data is already present when you start instances!
One recent answer to this moves primary storage into the cloud. This may seem counter-intuitive, but being in the cloud allows data protection via replication or erasure coding to be geographically dispersed, adding disaster readiness to normal operations at very little cost. With a cloud-based storage model, fast caching gateways ensure decent performance for the private cloud, though a trial run is a good idea to make sure your use case is covered.
Typically, the cloud-based hybrid storage model is delivered as storage-as-a-service by companies such as Zadara Storage, Clearsky Storage and Cloudian.
Handling big data
Big data can be handled by either an object storage system or by a scale-out, parallelized file system, such as Gluster, Lustre, IBM’s GPFS, Hadoop Distributed File System or Riak. The characteristics of the data in your use case will tend to drive your choice, with structured commercial data usually heading to HDFS or Riak, or to object storage.
The file-system alternatives Gluster and Lustre fit scientific data applications, where their use is well entrenched. But there are plenty of counter-examples, such as CERN’s use of Ceph object storage, so there is no hard and fast rule on choices.
Ceph object storage, Gluster, Lustre and HDFS are open-source code today, with Red Hat supporting the first two. GPFS can be licensed to run on COTS servers.
There's a lot of buzz today about “software-defined storage,” but in reality it’s hard to define what SDS is and what software meets its goals, especially with almost every piece of storage code getting hyped up. SDS is still embryonic, but packages are beginning to emerge. Characteristics include virtualization, whether to VMs or containers, no proprietary hardware dependencies and some very innovative approaches to processing data.
Some mainstream packages are on their way to SDS already. Ceph can be virtualized and separated from the storage hardware, for example, while Tegile and Rubrik are supporting extended metadata control of dataflows, which to my mind is where we are eventually going to arrive.
Coupled with code that homogenizes the underlying storage nodes, drawn from a mixed variety of vendors and using a variety of protocols, packages of this type will change storage in major ways, ultimately delivering much more automation along with richer features.
Admins need to pay attention to SDS. It will arrive very rapidly and be a game changer in the agile data center of the near future. Remember that SDS is a set of point solutions, which make sense individually today, but will integrate and add more value as the SDS approach evolves and standards coalesce.