The NVMe Transition

A look at how vendors are implementing the new storage protocol in their products, and what to consider before integrating the technology into your data center.

Chris M Evans

May 31, 2018

5 Min Read

The buzzword of the moment in the storage industry is NVMe, short for Non-Volatile Memory Express. NVMe is a new storage protocol that vastly improves the performance of NAND flash and storage class memory devices. How is it being implemented, and are all NVMe-enabled devices equal? And what should IT infrastructure pros consider before making the NVMe transition?

Background

NVMe was developed as a successor to the existing SAS and SATA protocols. Both SAS and SATA were designed for the age of hard drives, where mechanical head movement masked any protocol inefficiencies. Today with NAND flash, and in the future with storage class memory, the bottlenecks of SAS/SATA are far more apparent because the persistent media itself is so fast. NVMe addresses these performance problems and also supports much greater parallelism. The result is around a 10x improvement in IOPS for NVMe solid-state drives compared to SAS/SATA SSDs.
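
Much of that gain comes from queueing. SATA's AHCI interface exposes a single command queue of 32 entries and SAS a single queue of around 254, while NVMe allows up to 64K queues, each up to 64K commands deep. As a rough illustration, here is a minimal sketch comparing the per-spec maximums (real devices implement far fewer queues than the protocol permits):

```python
# Back-of-envelope comparison of protocol-level queue parallelism.
# Figures are the commonly cited per-spec maximums, not what any
# particular device actually implements.

protocols = {
    "SATA (AHCI)": {"queues": 1,      "depth": 32},
    "SAS":         {"queues": 1,      "depth": 254},
    "NVMe":        {"queues": 65_535, "depth": 65_536},
}

for name, p in protocols.items():
    outstanding = p["queues"] * p["depth"]
    print(f"{name:12s} {p['queues']:>6} queue(s) x depth {p['depth']:>6} "
          f"= {outstanding:,} commands in flight")
```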

Adoption models

Storage vendors are starting to roll out products that replace their existing architectures with ones based on NVMe. At the back-end of traditional storage arrays, drives have been connected using SAS. In recent weeks, both Dell EMC and NetApp have announced updates to their product portfolios that replace SAS with NVMe.

Dell EMC released PowerMax, the NVMe successor to VMAX. NetApp introduced AFF A800, which includes NVMe shelves and drives. In both cases, the vendors claim latency improves to around the 200-300µs level, with up to 300GB per second of throughput. Remember that both of these platforms scale out, so these estimates are for systems at their greatest level of scale.
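
As a rough sanity check, Little's Law ties those two figures together: outstanding I/Os = IOPS × latency. The I/O size below is an assumption for illustration only, since the vendors quote bandwidth rather than IOPS:

```python
# Little's Law sanity check: outstanding I/Os = IOPS x latency.
# The 32 KB I/O size is an illustrative assumption; the vendors
# quote only bandwidth and latency.

bandwidth = 300e9        # 300 GB/s throughput, vendor figure at full scale
latency = 300e-6         # 300 microseconds, upper end of the claim
io_size = 32 * 1024      # assumed I/O size in bytes

iops = bandwidth / io_size
outstanding = iops * latency
print(f"~{iops / 1e6:.1f}M IOPS, ~{outstanding:,.0f} I/Os in flight")
```

At an assumed 32KB I/O size, 300GB/s works out to roughly 9 million IOPS, requiring only around 2,700 concurrent I/Os to sustain -- well within reach of a fleet of attached hosts.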

Pure Storage recently announced an update to its FlashArray//X platform with the release of the //X90 model. This offers native NVMe through the use of DirectFlash modules. In fact, the FlashArray family has been NVMe-enabled for some time, which means the transition for customers can be achieved without a forklift upgrade, whereas PowerMax and AFF A800 are new hardware platforms.

NVMe is already included in systems from other vendors such as Tegile, which brought its NVMe-enabled platforms to market in August 2017. Vexata has also implemented both NVMe NAND and Intel Optane drives in a hardware product designed specifically for NVMe media. The Optane version of the VX-100 platform can deliver latency as low as 40µs with 80GB/s of bandwidth from just two controllers, Vexata claims.

End-to-end NVMe

A new term we’re starting to see emerge is end-to-end NVMe. This means that every step of the I/O path, from host to drive, uses the NVMe protocol. The first step was to enable back-end connectivity through NVMe; the next is to enable NVMe from host to array.

Existing storage arrays have used either Fibre Channel or iSCSI for host connections. Both carry the SCSI command set: Fibre Channel transports SCSI in FCP frames, and iSCSI is SCSI over TCP/IP. A new protocol, NVMeoF, or NVMe over Fabrics, allows native NVMe commands to be carried over either Fibre Channel or Ethernet networks.
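
The practical difference is where the SCSI translation drops out of the I/O path. A toy sketch of the simplified protocol stacks (real implementations involve more layers than shown here):

```python
# Illustrative-only view of the protocol layers an I/O traverses
# from host to drive. Stacks are heavily simplified.

stacks = {
    "iSCSI":     ["Ethernet", "TCP/IP", "iSCSI", "SCSI", "drive"],
    "FC (SCSI)": ["Fibre Channel", "FCP", "SCSI", "drive"],
    "NVMe/FC":   ["Fibre Channel", "NVMe", "NVMe drive"],
    "NVMe/RDMA": ["Ethernet", "RDMA", "NVMe", "NVMe drive"],
}

for name, layers in stacks.items():
    print(f"{name:10s} host -> " + " -> ".join(layers))
```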

Implementing NVMeoF over Ethernet requires new RDMA-capable adaptor cards, whereas NVMeoF for Fibre Channel works with the latest Gen5 16Gb/s and Gen6 32Gb/s HBAs. However, it’s early days for both transports, so don’t expect them to have the maturity of existing storage networking.

Controller bottlenecks

One side effect of faster storage media is the ability to max out the capability of the storage controller. A single Intel Xeon processor can fully drive perhaps only four or five NVMe drives, which means a storage array may never exploit the full performance of the media it contains.
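
A back-of-envelope calculation shows the scale of the problem. Both figures below are illustrative assumptions rather than measurements of any particular controller or drive:

```python
# Toy saturation estimate: how many NVMe drives it takes before the
# controller, not the media, becomes the limit. Assumed figures only.

controller_iops = 2_000_000   # assumed controller ceiling (4KB reads)
drive_iops = 500_000          # assumed per-drive ceiling (4KB reads)

drives_to_saturate = controller_iops / drive_iops
print(f"Controller saturates at ~{drives_to_saturate:.0f} drives")

# A fully populated 24-drive shelf behind that controller can only
# ever use a fraction of its media's performance:
shelf_drives = 24
utilisation = min(1.0, drives_to_saturate / shelf_drives)
print(f"Media utilisation of a 24-drive shelf: ~{utilisation:.0%}")
```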

Vendors have used two techniques to get around this problem. The first is to implement a scale-out architecture, with multiple nodes each providing compute and storage; WekaIO and Excelero take this approach. Both offer software-based solutions designed specifically for NVMe: WekaIO Matrix is a scale-out file system, whereas Excelero NVMesh is a scale-out block storage solution. In both instances, the software can be implemented in a traditional storage array design or used in a hyperconverged model.

The second approach is to disaggregate the functions of the controller and allow the host to talk directly to the NVMe drives. This is how products from E8 Storage and Apeiron Data work. E8 Storage appliances package up to 24 drives in a single shelf, which is directly connected to host servers over 100Gb/s Ethernet or InfiniBand. The result is up to 10 million read IOPS and 40GB/s of bandwidth at latency levels close to those of the SSD media itself.
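
Dividing those headline figures across a full shelf shows how little performance the disaggregated design leaves on the table; the even-spread assumption below is for illustration only:

```python
# Spreading E8's quoted shelf-level figures evenly across 24 drives,
# purely for illustration.

shelf_iops = 10_000_000   # vendor figure: 10 million read IOPS
shelf_bw = 40e9           # vendor figure: 40 GB/s
drives = 24

print(f"~{shelf_iops / drives / 1e3:.0f}K read IOPS per drive")  # ~417K
print(f"~{shelf_bw / drives / 1e9:.2f} GB/s per drive")          # ~1.67
```

Around 400K read IOPS per drive is close to what a single NVMe SSD of this era can deliver natively, suggesting the architecture adds very little overhead of its own.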

Apeiron’s ADS1000 uses custom FPGA hardware and hardened layer 2 Ethernet to connect hosts directly to NVMe drives using a protocol the vendor calls NVMe over Ethernet. The product offers near line-speed connectivity with only a few microseconds of latency on top of the media itself. This allows a single drive enclosure to deliver around 18 million IOPS with around 72GB/s of sustained throughput.

Choices

So what’s the best route to using NVMe technology in your data center? Moving to traditional arrays with an NVMe back-end provides an easy transition for customers that already use technology from the likes of Dell EMC or NetApp. However, these arrays may not deliver the full performance NVMe can offer because of bottlenecks at the controller and delays introduced by existing storage networking.

The disaggregated alternatives offer higher performance at much lower latency, but won’t simply slot into existing environments. Hosts potentially need dedicated adaptor cards, faster network switches, and host drivers.

As with any transition, IT organizations should review their requirements to see where NVMe delivers genuine benefit. If ultra-low latency is important, that alone could justify implementing a new storage architecture.

Remember that NVMe will -- in the short term, at least -- be sold at a premium, so it also makes sense to ensure the benefits of the transition justify the cost.

About the Author

Chris M Evans

Chris M Evans has worked in the IT industry for over 27 years. After receiving a BSc (Hons) in computational science and mathematics from the University of Leeds, he began his career in mainframes, following both systems programming and storage paths. During the dot-com boom, he also co-founded and successfully floated a company selling music and digital downloads. For most of the last 20 years, Chris has worked as an independent consultant, focusing on open systems storage and, more recently, virtualization and cloud. He has worked in industry verticals including financials, transport, utilities and retail, designing, deploying and managing storage infrastructure from all the major vendors. In addition to his consultancy work, Chris writes a widely read and respected blog at blog.architecting.it and produces articles for online publications. He has also featured in numerous podcasts as a guest and content provider.
