Information Strategist: Fixing Storage Before It Fixes Us

Despite advances in archiving, compression and other storage technologies, the unmanaged growth of data isn't going away soon. We have to fix the broken storage model before it's too late.

February 28, 2007


A couple of years ago, a smart guy from then-StorageTek, Rob Nieboer, concluded a presentation about business data growth with a sobering observation. Between the inefficient way storage is packaged and the inefficient way consumers use it, he noted, the cost of storage infrastructure would eventually bankrupt many businesses.

That gem of wisdom has stayed with me ever since--and it should be on the minds of many IT strategists today.

Despite the news about terabyte-sized SATA drives coming this spring, despite advances in archiving technology to help cull the older data that fills our storage "junk drawers" to the brim, and despite advances in compression and deduplication technologies that are now being released in the form of specialty software or appliances, the unmanaged growth of data isn't going away soon. Like death and taxes, data growth is inevitable, meaning so, too, is storage growth.

Bigger drives, however, don't address the cost conundrum. While it's true that disk capacity has doubled every 18 months since the early 1990s, and that the cost per GB has concurrently decreased by 50 percent each year, this hasn't translated into lower overall storage costs. Understanding that the component parts are becoming commodities, most array vendors seek to grow their margins with new "value-add" features and functions for their controllers. The result has been a steady increase in the price of an enterprise array, cheaper drives notwithstanding.
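A quick back-of-the-envelope sketch makes the point. The only figure below taken from this column is the 50 percent annual decline in price per GB; the starting price, the data growth rate and the vendor markup are hypothetical illustrations. Even so, it shows how raw disk spending can stay flat while array spending climbs, once controller "value-add" margin is layered on top of commodity drives.

```python
# Hypothetical illustration: per-GB disk price halves each year (per the column),
# but if data grows just as fast and the vendor markup for controller "value-add"
# keeps creeping up, total array spend rises anyway. All other figures assumed.

price_per_gb = 5.00      # assumed starting street price per GB (USD)
capacity_gb = 10_000     # assumed starting capacity requirement
data_growth = 2.0        # assumed: capacity needs double each year
price_decline = 0.5      # per-GB price halves each year (from the column)
array_markup = 3.0       # assumed markup for array "value-add" features
markup_growth = 1.10     # assumed: markup grows 10% a year as features pile on

for year in range(1, 6):
    raw_cost = price_per_gb * capacity_gb        # cost of the bare drives
    array_cost = raw_cost * array_markup         # what the enterprise actually pays
    print(f"Year {year}: {capacity_gb:>9,.0f} GB  "
          f"raw disk ${raw_cost:>8,.0f}  array spend ${array_cost:>9,.0f}")
    price_per_gb *= price_decline
    capacity_gb *= data_growth
    array_markup *= markup_growth
```

Under these assumed numbers, the drive bill never changes, yet the array bill grows every year. The cheaper the disk gets, the more of the invoice is margin.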

"Value-add" is an interesting thing. In a visit to a large enterprise storage consumer late last year, the complaint registered by a storage administrator was telling: "The vendor places a lot of software on the box. We have to pay licenses for 100 percent, but we only use about 10 percent. We replace a lot of the functionality with best-of-breed software from third-party software houses because it works better for us." Because of the way storage is packaged and sold, we buy more functionality than we need and use only a fraction of what we buy. So, where's the "added value"?Fundamentally, one might ask, why implement features in the array that are better implemented elsewhere in the storage network? Numerous technologies have begun to appear in the market that are challenging the monolithic architecture of "value-add arrays." These include: Gear6's network-based memory cache; Acopia Network's data management switch; Crossroads' storage routers; Caringo's platform-agnostic content indexing software and gateway; LeftHand Networks' hardware agnostic iSCSI clustering; and DataCore Software's Traveler continuous data-protection software. These and other products let us build storage intelligently and economically, buying only what we need based on what our applications require.

However, many vendors continue to pursue an everything-on-the-array approach. They argue that they are just providing what consumers want: a one-stop-shop array with a one-throat-to-choke sales and maintenance agreement. But what about the dark side? The consumer pays a premium for storage infrastructure that isn't necessarily tailored to its applications and that can even create vendor lock-in.

These vendors point to sales numbers, up and to the right, that appear to underscore their case. Even if one-size-fits-most storage doesn't fit anyone's needs very well, that doesn't stop folks from buying it.

Still, with storage hardware accounting for 33 percent to 70 percent of IT hardware spending in large enterprises, you almost have to wonder whether one-throat-to-choke doesn't translate into one-throat-to-cut--the consumer's. I suspect that a reckoning will be coming in short order. Auditors eventually will pull out their TCO calculators--hammers that make any expense look like a nail--and questions will be asked.

In the final analysis, cost itself will require that we fix the broken storage model before it fixes us. Going forward, I'll look at alternative strategies for addressing the data deluge. Stay tuned.

Jon William Toigo is CEO of storage consultancy Toigo Partners International, founder and chairman of the Data Management Institute, and author of 13 books. Write to him at [email protected].
