Stand-Alone Storage Virtualization

Why hasn't it caught on more than it has thus far?

December 5, 2008

Network Computing

3:00 PM -- Stand-alone storage virtualization is the concept of delivering traditional storage services without tying them to the purchase of any particular storage hardware. The basic idea is that you standardize on the services, not on the physical storage. Every time I see this drawn up on my whiteboard and presented to me, it makes perfect sense. But why hasn't it caught on more than it has thus far?

From an architecture standpoint, this functionality can be integrated into the switch/director, or an appliance can be embedded into the SAN fabric. The storage services are typically volume management, volume provisioning, snapshots, replication, and a host of other services. These services can be applied equally across a variety of hardware platforms.
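To make the architecture concrete, here is a minimal sketch of the idea. All of the names are mine, invented for illustration; no vendor's actual API looks like this. The point is only that volume management, provisioning, snapshots, and migration live in the virtualization layer, while the physical arrays are reduced to raw capacity:

```python
# Hypothetical sketch: a fabric-level virtualization layer that presents
# uniform volume services across dissimilar physical arrays.

class Array:
    """Stand-in for one physical storage system behind the fabric."""
    def __init__(self, name, capacity_gb):
        self.name = name
        self.free_gb = capacity_gb

class VirtualizationLayer:
    """Owns the storage services; the arrays just supply capacity."""
    def __init__(self, arrays):
        self.arrays = arrays
        self.volumes = {}    # volume name -> (array, size_gb)
        self.snapshots = {}  # volume name -> list of snapshot labels

    def provision(self, name, size_gb):
        # Place the volume on whichever array has room; hosts never
        # need to know which physical box was chosen.
        for a in self.arrays:
            if a.free_gb >= size_gb:
                a.free_gb -= size_gb
                self.volumes[name] = (a, size_gb)
                return a.name
        raise RuntimeError("no capacity available")

    def snapshot(self, name, label):
        # One snapshot interface regardless of the underlying vendor.
        self.snapshots.setdefault(name, []).append(label)

    def migrate(self, name, target):
        # The leading use case: move a volume between arrays while the
        # host-visible volume name stays the same.
        src, size = self.volumes[name]
        target.free_gb -= size
        src.free_gb += size
        self.volumes[name] = (target, size)
```

In this sketch, replacing an old array with a new one is just a `migrate()` call per volume; the services and the volume names the hosts see never change, which is exactly the appeal of the model.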

There were some young upstarts in this space early on, but companies like IBM Corp. (NYSE: IBM), LSI Corp. (NYSE: LSI), Hitachi Data Systems (HDS), EMC Corp. (NYSE: EMC), and NetApp Inc. (Nasdaq: NTAP) all offer some form of this capability now. Even the upstarts -- FalconStor Software Inc. (Nasdaq: FALC) and DataCore Software Corp., for example -- have been around long enough to be considered safe. Most of the inroads these systems have made tend to be for a specific capability. Data migration -- seamlessly and quickly moving from one storage hardware platform to another -- seems to be the leading use case.

And yet its market share has stayed relatively small. While the quality of the storage services and the usability of the interfaces tend to be more than adequate, the key challenge is the complexity -- implied or real -- involved in integrating all of this disparate physical storage hardware behind the new storage services device. In addition, some of the solutions are software only, which means loading that software onto your own hardware to get it up and running.

There is also the issue of supportability. If there is a failure, whose fault is it? At the end of the day, the one-throat-to-choke concept sounds pretty good when your systems are down.

I think the big issue is user preference. While it may sound good to standardize on one set of storage services and then buy anyone's hardware, most customers don't do this. I think that's because the real reason most people buy a storage system is what the storage software can do.

Most vendors now offer multiple tiers of storage in a single box (SSD, Fibre Channel, SAS, SATA). Most vendors offer multiple-protocol support (FC, iSCSI, FCoE). And many vendors have some sort of NAS or file-services offering, although the quality of those NAS products is still a key differentiator. All these choices from a single vendor make supplier flexibility less of an issue.

The result is that most people buy from a manufacturer because a specific capability -- thin provisioning, wide striping, or quality NAS services -- pushes that vendor to the top. The capability may not be software alone; it could also be a hardware architecture that delivers performance that might be compromised if control of the storage hardware were surrendered to a stand-alone storage virtualization device.
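Thin provisioning is a good example of the kind of software capability that drives these purchases: a volume advertises its full logical size up front, but physical capacity is consumed only as blocks are actually written. A minimal sketch of that mechanic (the class and names are my own illustration, not any vendor's implementation):

```python
# Hypothetical illustration of thin provisioning: the volume reports its
# full logical size, but backing capacity is allocated on first write.

class ThinVolume:
    def __init__(self, logical_gb, block_gb=1):
        self.logical_gb = logical_gb    # size the host sees
        self.block_gb = block_gb        # allocation granularity
        self.allocated = set()          # block indices with real capacity

    def write(self, block_index):
        if block_index * self.block_gb >= self.logical_gb:
            raise IndexError("write past end of volume")
        self.allocated.add(block_index)  # allocate only on first touch

    @property
    def physical_gb(self):
        # Capacity actually consumed, as opposed to capacity promised.
        return len(self.allocated) * self.block_gb

vol = ThinVolume(logical_gb=100)
vol.write(0)
vol.write(7)
vol.write(7)  # rewriting an allocated block consumes nothing new
```

Here a 100-GB volume consumes only 2 GB of physical capacity, which is why the feature lets a vendor promise far more capacity than the customer has installed.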

So when will these devices become more mainstream? When they offer all the services that you need, without compromise. If all you need is basic storage services, that may be today for your data center. If you want some of the newer storage services or are concerned about support, then it is going to be a while.

George Crump is founder of Storage Switzerland, which provides strategic consulting and analysis to storage users, suppliers, and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
