Will Ethernet Take Over The Spinning Disk?

With its open Ethernet drive architecture, Western Digital's HGST subsidiary joins Seagate's Kinetic drives in using an Ethernet port to access an object store on the disk.

Howard Marks

May 16, 2014


It must be a tough time to be running a disk drive company. Flash is taking the high-performance, high-margin top end of your market, and since Seagate and Western Digital each hold more than 40% of it, you can't really buy your way to more share. To ensure the future of a hard disk drive company, you have to diversify into SSDs and add value beyond mere capacity to your spinning disks -- such as an Ethernet interface.

While the ATA-over-Ethernet (AoE) protocol at the heart of Coraid's storage systems was designed to let a server send block storage commands directly to individual disk drives, the concept never caught on in the mainstream. Even in the Fibre Channel world, where each drive was individually addressable, software RAID and JBODs never made sense -- they put too much load on the host's processor.

The new generation of Ethernet drives isn't just using Ethernet as a new connection interface; it's also raising the communications protocol from simple commands that read and write data blocks to a higher level of abstraction. At this week's OpenStack Summit, Western Digital's HGST subsidiary demonstrated its open Ethernet drive architecture. Like Seagate's Kinetic drives, it uses a 1 Gbit/s Ethernet port to access an object store on the spinning disk.
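To make that shift in abstraction concrete, here is a toy sketch contrasting the two interfaces. Neither class reflects any actual drive protocol -- Kinetic, for instance, speaks a Protocol Buffers-based protocol over TCP -- and all the names and methods here are invented purely for illustration.

```python
class BlockDrive:
    """Toy model of a conventional drive interface: the host reads and
    writes fixed-size sectors by logical block address (LBA), and the
    host-side software decides what goes where."""

    SECTOR = 512

    def __init__(self, sectors=1024):
        self._media = bytearray(sectors * self.SECTOR)

    def write_sector(self, lba: int, data: bytes) -> None:
        assert len(data) == self.SECTOR
        self._media[lba * self.SECTOR:(lba + 1) * self.SECTOR] = data

    def read_sector(self, lba: int) -> bytes:
        return bytes(self._media[lba * self.SECTOR:(lba + 1) * self.SECTOR])


class ObjectDrive:
    """Toy model of the higher-level interface on an Ethernet drive:
    the host puts and gets named objects, and the drive itself decides
    where the bytes land on the platters."""

    def __init__(self):
        self._store = {}  # stands in for the drive's on-disk object store

    def put(self, key: str, value: bytes) -> None:
        self._store[key] = value

    def get(self, key: str) -> bytes:
        return self._store[key]


bd = BlockDrive()
bd.write_sector(0, b"x" * BlockDrive.SECTOR)

od = ObjectDrive()
od.put("photos/cat.jpg", b"...jpeg bytes...")
```

The point of the contrast: with the block interface, allocation and layout are the host's problem; with the object interface, they move down into the drive.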

By implementing the basic Object Storage Device (OSD) on the disk drives, this new generation of Ethernet drives offloads media management and even data placement for object storage systems from the storage server to the disk drive. This lets architects of object storage systems like OpenStack Swift manage a hundred or more Ethernet-connected drives with a single server that would ordinarily manage 12 to 24 SAS or SATA drives.
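The data-placement offload is easier to picture with a sketch. The toy ring below hashes object names onto a set of individually addressable drives, loosely in the spirit of OpenStack Swift's consistent-hashing ring. The drive names, port number, and replica logic are invented for illustration and are far simpler than Swift's real partition-and-zone placement.

```python
import hashlib

class SimpleRing:
    """Toy placement ring: hashes an object name to pick drives for its
    replicas. Illustrative only -- Swift's real ring adds partitions,
    zones, and rebalancing logic well beyond this sketch."""

    def __init__(self, drives, replicas=3):
        self.drives = sorted(drives)
        self.replicas = replicas

    def placement(self, obj_name):
        # Hash the object name to a starting position on the ring, then
        # take the next `replicas` drives in ring order for the copies.
        digest = hashlib.md5(obj_name.encode()).hexdigest()
        start = int(digest, 16) % len(self.drives)
        return [self.drives[(start + i) % len(self.drives)]
                for i in range(self.replicas)]

# With Ethernet drives, each ring entry can be a drive's own network
# address (hypothetical names) rather than a slot behind a storage server.
ring = SimpleRing(f"drive-{n:03d}.example:9100" for n in range(100))
print(ring.placement("videos/cat.mp4"))
```

Because placement is a pure function of the object name and the drive list, a single lightweight server can route requests straight to a hundred or more drives without tracking per-object metadata.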

There are some significant differences between HGST's open Ethernet drives and Seagate's Kinetic. Seagate decided to implement a simple key-value store and to work with vendors like SwiftStack and Scality to interface their object stores to it. HGST gives developers an even more open playing field by letting them run an instance of Linux on each disk drive.

Each HGST open Ethernet architecture drive has a 32-bit ARM processor and memory in a system-on-chip ASIC. Storage system developers can recompile the code for their basic storage building block for ARM and run it on the drive itself. HGST claims this means developers can run their native code rather than writing connectors to Kinetic's key-value API. Given the amount of support I've seen for Kinetic, I don't think this is a big deal for anyone but a developer who hasn't gotten Kinetic working yet.

Unlike Seagate, HGST hasn't announced an actual product and is calling this a technology demonstration. At the OpenStack Summit, it demonstrated the Swift OSD running on 4TB drives. It also showed a 4U, 60-drive chassis with a built-in Ethernet switch and 10 Gbit/s uplinks.

As the storage market bifurcates into performance and capacity segments, Ethernet disk drives may be a great solution for exabyte-scale storage. Ethernet disk drives, like ARM-based microservers, are based on the assumption that it's more cost-effective to spread a workload -- in this case a storage system -- across thousands of low-cost processors than to run it on a few more powerful Xeons. Western Digital and Seagate are betting they can make a little more margin on each Ethernet-connected disk drive and still keep it less expensive than using Xeon servers to provide OSDs.

If they can, there will be a real market for these drives as the perfect place to keep our never-ending pile of stored rich media in both public and private cloud stores. However, if even exabyte-scale object stores don't see a significant cost savings, Seagate and Western Digital may not sell enough Ethernet-interface drives for the market to find other appropriate use cases, or for them to recoup their development costs.

About the Author

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at DeepStorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at: http://www.deepstorage.net/NEW/GBoS
