Solid State 101: Where To Deploy SSDs Now
Some solid-state systems are now competitive with high-performance 15,000-rpm drives.
February 15, 2012
Say you want to deploy solid-state storage now. The most popular way--if Fusion-io's triple-digit sales growth is any indication--is via internal PCIe-based cards loaded with flash or DRAM and typically sporting capacities in the 300-GB to 1.4-TB range. To the server operating system, these cards look like any other disk device, with LUNs that can be used locally or exported on a storage area network. The main problem: They're not easily sharable. So, instead of plugging flash cards into individual servers, why not build an entire storage array out of them?
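To get a feel for just how ordinary these cards look to software, consider the minimal sketch below. It treats a PCIe flash card as a plain Linux block device and times random 4-KB reads. The device path, read count, and block size are assumptions for illustration, not values from any vendor's documentation.

import os, random, time

DEVICE = "/dev/fioa"   # hypothetical device node; PCIe cards surface as ordinary block devices
BLOCK = 4096           # 4-KB reads, a common benchmark transfer size
READS = 10_000

fd = os.open(DEVICE, os.O_RDONLY)      # typically requires root
size = os.lseek(fd, 0, os.SEEK_END)    # block devices report their size via seek-to-end

start = time.monotonic()
for _ in range(READS):
    # Pick a block-aligned offset and issue a random 4-KB read.
    offset = random.randrange(size // BLOCK) * BLOCK
    os.pread(fd, BLOCK, offset)
elapsed = time.monotonic() - start
os.close(fd)

# Note: without O_DIRECT, the OS page cache can inflate this number.
print(f"~{READS / elapsed:,.0f} random-read IOPS")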
Until recently, that would have been a preposterously expensive proposition. However, Moore's Law eventually crushes even the most stubborn semiconductor price barriers, and some solid-state systems are now competitive with high-performance 15,000-rpm drives. Erik Eyberg, senior analyst at Texas Memory Systems, says his company's arrays based on multilevel cell (MLC) flash, at about $12.50 per gigabyte, will soon approach the cost of 10,000-rpm disks.
The secret to this new breed of silicon storage is that the systems are designed from the ground up to be all solid state, all the time. They ditch the disk controller architecture, instead relying on custom silicon and software to perform memory management, load balancing, wear leveling, and redundancy. On the inside, they look much like a server with banks of memory modules. But on the outside, they expose standard storage interfaces--native Fibre Channel or Ethernet/iSCSI--and disk LUNs. This solid-state-optimized design yields truly astounding performance: systems using single-level cell (SLC) flash typically exceed 1 million IOPS for both reads and writes, while MLC-based arrays run in the 300,000 to 500,000 IOPS range.
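To put those IOPS figures in perspective, the quick conversion below translates them into raw bandwidth. The 4-KB transfer size is an assumption (a common benchmark default), not something the vendors specify here.

def iops_to_gb_per_sec(iops: int, block_size_bytes: int = 4096) -> float:
    """Approximate throughput implied by a given IOPS rate."""
    return iops * block_size_bytes / 1e9

for label, iops in [("SLC array", 1_000_000),
                    ("MLC array, low end", 300_000),
                    ("MLC array, high end", 500_000)]:
    print(f"{label}: {iops:,} IOPS ~ {iops_to_gb_per_sec(iops):.1f} GB/s at 4 KB")

At a 4-KB transfer size, 1 million IOPS works out to roughly 4 GB per second of sustained random I/O.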
Another class of all-solid-state storage takes a less revolutionary design approach, replacing banks of flash modules or cards with an array of off-the-shelf SSDs. Products like those from Pure Storage and WhipTail use a more conventional controller architecture; they are, essentially, an all-solid-state variant of the hybrid arrays from big storage vendors such as EMC, HP, and NetApp, which integrate shelves of SSDs into a larger disk-based system. In those hybrid arrays, IT can partition capacity into LUNs that are either purely SSD-based or mixed, with the controller's auto-tiering software automatically moving frequently accessed data onto SSD.
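Vendors guard the details of their auto-tiering algorithms, but the underlying idea can be sketched in a few lines. The following is a hypothetical illustration, not any vendor's implementation: count accesses per extent and promote hot extents to the SSD tier, demoting the coldest resident extent when space runs out.

from collections import Counter

class TieringController:
    """Toy model of heat-based auto-tiering; all names and thresholds are invented."""

    def __init__(self, ssd_capacity_extents: int, promote_threshold: int = 100):
        self.heat = Counter()            # extent id -> access count ("heat")
        self.ssd_tier = set()            # extents currently resident on SSD
        self.capacity = ssd_capacity_extents
        self.threshold = promote_threshold

    def record_access(self, extent: int) -> None:
        self.heat[extent] += 1
        if extent not in self.ssd_tier and self.heat[extent] >= self.threshold:
            self._promote(extent)

    def _promote(self, extent: int) -> None:
        if self.ssd_tier and len(self.ssd_tier) >= self.capacity:
            # Evict the coldest SSD-resident extent back to disk to make room.
            coldest = min(self.ssd_tier, key=lambda e: self.heat[e])
            self.ssd_tier.remove(coldest)
        self.ssd_tier.add(extent)        # a real array would migrate the data here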
Flash cards and SSDs are also showing up in so-called scale-out appliances--typically small storage devices that can be aggregated into larger pools. Each node runs a local OS and a distributed file system, coupled with centralized management and control software. Initially, scale-out boxes resembled network-attached storage appliances that could be assembled into large virtualized storage arrays; however, some vendors now favor the beefed-up processing that comes from standard x86 motherboards and CPUs, which can host general-purpose hypervisors as well.
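How a scale-out pool decides which appliance holds which data varies by product, but consistent hashing is one generic placement technique that fits this description; the sketch below is illustrative and assumes nothing about any particular vendor's design. Its virtue is that adding or removing a node remaps only a fraction of the data.

import bisect, hashlib

class ConsistentHashRing:
    """Map data blocks to appliance nodes; adding a node moves only ~1/N of keys."""

    def __init__(self, nodes, vnodes: int = 64):
        points = []
        for node in nodes:
            for i in range(vnodes):      # virtual nodes smooth the distribution
                points.append((self._hash(f"{node}:{i}"), node))
        points.sort()
        self._hashes = [h for h, _ in points]
        self._nodes = [n for _, n in points]

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        # First ring point clockwise from the key's hash, wrapping at the end.
        idx = bisect.bisect(self._hashes, self._hash(key)) % len(self._hashes)
        return self._nodes[idx]

ring = ConsistentHashRing(["appliance-1", "appliance-2", "appliance-3"])
print(ring.node_for("volume-7/block-1048576"))   # hypothetical block identifier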
Go to the main story: 2012 State Of Storage: Year Of SSDs
InformationWeek: Feb 27, 2012 Issue