The Argument for SSDs: Saving Storage Dollars

Solid-state disks, once a cost-prohibitive way to store data, can help IT organizations drive big performance gains while actually lowering costs

January 7, 2009


No longer is the business mantra merely "Do More With Less." It's more like "Do More With Nothing!" In the midst of a Category Five economic hurricane, IT budgets are being slashed while IT managers are still expected to shrink the data center footprint, consume less power, reduce management overhead, cut capital budgets, and shave operating expenditures. To compound the pain, with each cost-cutting initiative successfully implemented, IT planners have fewer viable choices left for meeting business edicts over time.

Fortunately, there are still some creative ways for IT planners to pick low-hanging, cost-cutting fruit while gaining significant performance enhancements at no risk to the business. One answer is solid-state disks (SSDs): once a cost-prohibitive way to store data, they can now help IT organizations drive big performance gains while actually lowering costs.

Mechanical disk drive technology has served as the primary storage medium for decades. Whether businesses purchased internal, direct-attached, or network-based storage for their applications, the basic architectural model was the same: provide some sort of cache/silicon-based memory frontend (whether internal to the server, onboard a disk controller, or a special reserve inside an array) to a larger mechanical disk storage pool at the backend. The idea is to keep active data inside cache memory long enough that application response doesn't suffer from the delays incurred waiting for I/Os from spinning disks.

The challenge is that cache represents a very small percentage of the overall storage backend. As a result, storage and application administrators must continually tune and reconfigure systems to ensure the most frequently accessed data sets stay in cache, or at the very least can be read quickly from disk by striping data sets across a large number of very fast, expensive disk drives.
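
To see why a small cache is such a constraint, consider a quick back-of-the-envelope sketch in Python; the latency figures are illustrative assumptions, not measurements from any particular array:

    # Back-of-the-envelope effective latency for a cache-fronted disk array.
    # Both latency figures are illustrative assumptions, not vendor data.
    CACHE_LATENCY_MS = 0.05   # assumed DRAM cache service time
    DISK_LATENCY_MS = 5.0     # assumed 15K rpm drive service time (seek + rotation)

    def effective_latency_ms(hit_ratio: float) -> float:
        """Average I/O latency given the fraction of requests served from cache."""
        return hit_ratio * CACHE_LATENCY_MS + (1 - hit_ratio) * DISK_LATENCY_MS

    for hit in (0.99, 0.90, 0.70):
        print(f"hit ratio {hit:.0%}: {effective_latency_ms(hit):.2f} ms per I/O")

    # Dropping from a 99% to a 90% hit ratio raises average latency roughly
    # fivefold, which is why administrators spend so much effort keeping the
    # hottest data sets in cache.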

There are several techniques or workarounds that application and system administrators generally employ to overcome cache memory constraints. In short, they involve configuring large pools of high-speed (15,000 rpm) Fibre Channel drives in conjunction with multiple high-performance servers running multi-threaded applications.

When tuning backend disks for use in online transaction processing (OLTP) environments, one best practice is to confine I/O to the outer portion of a spinning disk platter. That way, the disk arm never has to seek to or write data on the innermost (slowest) tracks of the platter. This technique is referred to as "short stroking." While it does deliver higher performance, it comes at a large premium: short stroking negates 50 percent to 75 percent of the effective capacity of any given storage subsystem.
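
A rough sketch of the short-stroking tradeoff, using assumed (not vendor-specific) drive capacities and working-set sizes:

    import math

    # Illustrative short-stroking arithmetic: confining I/O to the fast outer
    # tracks sacrifices most of each drive's capacity. All figures are assumed.
    DRIVE_CAPACITY_GB = 450    # assumed 15,000 rpm Fibre Channel drive
    USABLE_FRACTION = 0.25     # assume only the outer quarter of the platter is used
    WORKING_SET_GB = 10_000    # assumed application data set (10 TB)

    usable_per_drive_gb = DRIVE_CAPACITY_GB * USABLE_FRACTION
    drives_needed = math.ceil(WORKING_SET_GB / usable_per_drive_gb)
    raw_tb = drives_needed * DRIVE_CAPACITY_GB / 1000

    print(f"{drives_needed} drives ({raw_tb:.1f} TB raw) for {WORKING_SET_GB / 1000:.0f} TB usable")
    # 89 drives and ~40 TB of raw capacity to serve a 10 TB working set:
    # 75 percent of the purchased capacity is deliberately left idle.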

Another technique, often used in tandem with short stroking, is writing large stripes across multiple disk drives. The idea is to leverage many independent spindles to reduce I/O times. But again, this comes with a significant cost penalty. Lastly, in order to maintain performance, many high-end multi-core servers are required to keep those disk resources occupied processing multiple application threads. In short, when combining short-stroking methods with large stripes and a farm of high-end servers, the fully burdened costs (hardware, software, maintenance, etc.) become very substantial and perhaps even cost prohibitive, especially when these approaches are carried out on high-end storage platforms.
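
The spindle-count arithmetic behind wide striping can be sketched the same way; the per-drive IOPS figure and workload target below are assumptions for illustration only:

    import math

    # Illustrative wide-striping arithmetic: random I/O performance scales with
    # spindle count, so an IOPS target dictates the drive purchase. Assumed figures.
    IOPS_PER_DRIVE = 180      # assumed random IOPS from one 15K rpm drive
    TARGET_IOPS = 50_000      # assumed OLTP workload requirement

    spindles = math.ceil(TARGET_IOPS / IOPS_PER_DRIVE)
    print(f"{spindles} drives needed just to reach {TARGET_IOPS:,} IOPS")
    # ~278 spindles purchased for performance alone, regardless of how much
    # capacity the application actually needs -- the cost penalty described above.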

A simpler, more elegant solution would be to eliminate spinning disk altogether and store everything in silicon, on SSDs. Despite the dramatic decrease in SSD costs, however, this approach is still not economically viable for most businesses.

A very effective strategy, however, is to pair an SSD array with a traditional Tier 2 storage system, gaining the benefits of a much larger cache store along with the economies of scale afforded by reliable midrange disk drive technology. This approach offers enormous performance and operational efficiencies, especially for organizations that have an investment in Tier 1 storage technology. Indeed, a paired SSD/Tier 2 configuration generally costs an order of magnitude less than a similarly configured, standalone, high-end storage system. What makes this approach even more enticing for IT decision makers is that an SSD/Tier 2 storage architecture delivers performance enhancements well beyond what can be achieved by a fully loaded "top of the line" Tier 1 configuration.
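
A hypothetical cost sketch illustrates the gap; every price below is a placeholder chosen only to show the shape of the comparison, not a quote from any vendor:

    # Hypothetical cost sketch of the SSD/Tier 2 pairing versus a standalone
    # high-end array. Every price is an illustrative placeholder, not a quote.
    TIER1_ARRAY = 2_000_000    # assumed fully configured Tier 1 system
    TIER2_ARRAY = 120_000      # assumed midrange array for bulk capacity
    SSD_APPLIANCE = 80_000     # assumed purpose-built SSD array for hot data

    paired = TIER2_ARRAY + SSD_APPLIANCE
    print(f"SSD + Tier 2: ${paired:,} vs. Tier 1: ${TIER1_ARRAY:,}")
    print(f"cost ratio: {TIER1_ARRAY / paired:.0f}x")
    # Under these assumed prices the paired approach comes in at one-tenth the
    # cost -- the order-of-magnitude gap described above.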

Perhaps sensing that a market is opening up for SSD-focused challengers, the traditional storage manufacturers are beginning to offer Flash memory add-ons to their existing storage products -- essentially an additional tier in "tiered storage in a box."

This may at first seem a logical approach for IT organizations: they likely already have a business investment and partner relationship with an established storage supplier, so why not just buy a shelf of SSD from them? But there are challenges and some distinct disadvantages to bolting SSD onto a traditional storage array that should be factored into the decision.

For starters, traditional storage arrays were specifically designed to manage I/O flow between a limited cache memory store and a much larger pool of mechanical disk drives. In short, the overhead (latency) inherent in the design of traditional storage platforms acts as a speed barrier to SSD resources. Faster storage resources are now available in the array (the SSDs), but the array cannot drive them at their true effective speeds.
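
A simple latency-stack sketch shows why; the figures are assumptions chosen to illustrate the proportions, not measured numbers:

    # Why controller overhead that is negligible for disk dominates for SSD.
    # All latency figures are illustrative assumptions.
    CONTROLLER_MS = 0.5    # assumed latency through the legacy controller path
    DISK_MEDIA_MS = 5.0    # assumed mechanical seek + rotational delay
    SSD_MEDIA_MS = 0.1     # assumed flash read latency

    for name, media in (("disk", DISK_MEDIA_MS), ("SSD", SSD_MEDIA_MS)):
        total = CONTROLLER_MS + media
        print(f"{name}: controller overhead is {CONTROLLER_MS / total:.0%} of {total:.1f} ms")

    # ~9% of a disk I/O but ~83% of an SSD I/O: once the media is fast, the
    # legacy controller path itself becomes the speed barrier.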

A second disadvantage of offering SSD as a component of tiered storage in a box is a reduction of the space and resources normally reserved for spinning disk drives. For instance, one or two SSDs can consume all the available bandwidth normally shared across 12 conventional disk drives, resulting in a substantial drop in available storage capacity plus the cost of a three-quarters-empty shelf. Furthermore, only modest capacities of SSD will be available in a consolidated storage frame -- somewhere between 80 Gbytes and 160 Gbytes of SSD space.
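
The bandwidth arithmetic can be sketched as follows, with assumed per-device throughput figures rather than vendor specifications:

    # Illustrative shelf-bandwidth arithmetic: a couple of SSDs can saturate a
    # back-end loop sized for a dozen mechanical drives. Figures are assumed.
    SHELF_BANDWIDTH_MBPS = 400   # assume a 4 Gbit/s FC loop, ~400 Mbytes/s usable
    HDD_THROUGHPUT_MBPS = 30     # assumed sustained throughput per mechanical drive
    SSD_THROUGHPUT_MBPS = 200    # assumed sustained throughput per SSD

    print(f"drives to saturate the shelf: "
          f"{SHELF_BANDWIDTH_MBPS // HDD_THROUGHPUT_MBPS} HDDs vs. "
          f"{SHELF_BANDWIDTH_MBPS // SSD_THROUGHPUT_MBPS} SSDs")
    # Two SSDs consume the bandwidth a 12-plus drive shelf was designed to
    # share, leaving the remaining slots effectively stranded.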

To take advantage of the performance capabilities of SSD technology, traditional suppliers may have to design newer, smaller shelves or develop a totally redesigned storage system from the ground up, optimized for SSD. The challenge here is that the components needed for maximum SSD performance are overkill for mechanical drives. The choice will often be full-performance SSD with overpriced mechanical drives, or low-performance SSD with correctly priced mechanical drives.

In contrast, SSD-only suppliers provide "purpose built" storage arrays or appliances designed from the ground up to optimize I/O throughput on SSD storage. SSD-only systems are not hindered by the pre-existing architectural limitations handicapping traditional storage platforms, so end users can expect significantly better performance from these purpose-built platforms. In fairness, if they added support for mechanical drives, SSD suppliers would face the same overkill problem described above. Instead, SSD-only manufacturers focus on the smaller data sets that intrinsically benefit from SSD-class performance and design their systems to coexist with traditional mechanical-drive-based storage solutions.

The capacity per rack unit of purpose-built SSD arrays also far exceeds that of the comparable SSD offerings from traditional manufacturers -- as much as 20 times the density of the SSD shelves those suppliers offer. SSD-specific systems provide as much as 4 Tbytes in a few rack units of space and up to 20 Tbytes in a fully populated rack.

To be sure, not all data deserves to be stored on SSD; only the most performance-demanding, business-critical data does. Examples include OLTP application data, frequently accessed database tables, log files, and enterprise resource management environments. While the cost per gigabyte of SSD storage is higher than that of conventional disk, IT decision makers also need to take into account the return on I/O. In other words, if more I/Os translate into more profits, then moving business data to SSD becomes very compelling.

It is especially important to consider the cost per I/O as opposed to the cost per gigabyte. Systems that require inordinately large disk drive configurations, short-stroking techniques, and multiple application servers are using very expensive methods of attaining performance. Furthermore, that approach flies directly in the face of consolidation and green initiatives, as more disks translate into more floor space, more power and cooling, more capital costs, higher management overhead, etc.
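
A simple comparison of the two metrics, using illustrative (assumed) prices and performance figures, makes the point:

    # Cost per gigabyte versus cost per I/O, using assumed prices and
    # performance figures rather than vendor numbers.
    DEVICES = {
        "15K HDD": {"price": 800,  "capacity_gb": 450, "iops": 180},
        "SSD":     {"price": 3000, "capacity_gb": 146, "iops": 30_000},
    }

    for name, d in DEVICES.items():
        per_gb = d["price"] / d["capacity_gb"]
        per_iops = d["price"] / d["iops"]
        print(f"{name}: ${per_gb:.2f}/GB, ${per_iops:.3f} per IOPS")

    # Under these assumptions the SSD loses on $/GB (~$20.55 vs. ~$1.78) but
    # wins on cost per I/O by more than 40x ($0.100 vs. $4.444 per IOPS).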

Perhaps a better metric to apply is the cost of "manufactured I/O." As performance-built I/O workhorses, SSD arrays scale much more elegantly than their disk array counterparts and natively provide the performance required for I/O-intensive environments.

Even if IT planners can't draw direct correlations between higher performance and greater profitability, justifying an SSD purchase can simply be a matter of rebuilding the backend store to realize significant savings and enhanced performance.

With the advent of lower-cost, purpose-built SSD storage, IT planners can deliver on demanding business edicts to drive efficiencies and streamline operations while dramatically improving service levels. As SSD becomes increasingly mainstream, it is essential for decision makers to be cognizant of the performance, cost, and operational tradeoffs associated with "packaged" solutions.

George Crump is founder of Storage Switzerland, which provides strategic consulting and analysis to storage users, suppliers, and integrators. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest integrators.
