Startup StorSpeed Applies Intelligence To Application Caching

StorSpeed, a startup company fresh out of stealth mode, has introduced a cache appliance that the company calls "application aware" to balance transfer speed and I.T. costs in medium-sized companies.

October 16, 2009

4 Min Read

StorSpeed, a startup company fresh out of stealth mode, has introduced a cache appliance that the company calls "application aware" to balance transfer speed and I.T. costs in medium-sized companies. Spinning disks are, in the grand computer-system scheme of things, slow: it takes time for the mechanical platters to spin around to the proper spot for data to be read. Speeding up storage access means moving data off spinning disks to faster (but more expensive) technology.

Mark Cree, chairman and CEO of StorSpeed, says that the company's approach has meant a lengthier time in development. "We could have built a cache in a fraction of the time, but we've applied for 8 patents on the application-aware part of this," Cree explains. He says that the application-aware technology is not only within the appliance, but also in the analysis and reporting software StorSpeed provides as part of the complete system.

Cree says, "Customers don't know which applications are requesting data in specific patterns, but they know that only about 10% of data is actually active. We can report on this as well as analyze the traffic in addition to doing the acceleration." He says that applying intelligent caching allows low-cost storage to be used as the data repository even for high-performance networks. "We allow private cloud storage or low-cost dense storage to be used behind us and still get solid performance."
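Cree's point is the classic caching argument: if only a small hot set of data is active, a fast tier in front of cheap storage absorbs most reads. As a rough illustration only (StorSpeed's actual algorithms are patented and not public), a minimal read-through LRU cache over a slow backing store could be sketched like this:

```python
from collections import OrderedDict

class ReadThroughCache:
    """Toy read-through LRU cache: hot blocks are served from fast memory
    while the full data set stays on low-cost backing storage. This is a
    sketch of the general idea, not StorSpeed's algorithm."""

    def __init__(self, capacity, backing_read):
        self.capacity = capacity
        self.backing_read = backing_read   # function: block_id -> data (slow path)
        self.blocks = OrderedDict()        # insertion order doubles as LRU order
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(block_id)   # mark most recently used
            return self.blocks[block_id]
        self.misses += 1
        data = self.backing_read(block_id)      # slow path: the spinning disk
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)     # evict least recently used
        return data
```

If 10% of the blocks see 90% of the reads, even a cache sized at a fraction of the data set yields a high hit rate, which is the economic case Cree is making.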

One of the environments Cree expects to be a heavy use case for intelligent caching is virtualized storage and server deployments, where the device covers both sides of the picture. "Virtual machines are interesting because, if you think about the way they perform, they all share a single network adapter out of the hardware. We look at the IP address and can accelerate the performance back to a single address based on the requirements of the application," Cree says. "We can tier storage behind us, but also provide IOPS between filers. On the network side we can tie storage access back to a particular IP address or range, regardless of who's providing the storage."

When an I.T. department begins deployment, Cree points out that the system should not require significant change to the storage subsystem. "You drop us in, we spoof ID to both the file system and clients, and there's no operational change," he says. Deploying the appliance is designed to be fast and simple. "There are no changes on either side; the appliance is a bump on the wire. There's no optimization required on the back end. Random data has been impossible to optimize. With our system, once you have a stick in the ground with an IP address, for example, you take away the randomness. We can show a chart with the IOPS by IP address. This by itself is a huge deal."
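The per-address accounting Cree describes can be illustrated with a trivial sketch (the function name and interface here are invented for illustration): count operations per client IP over a measurement window, then divide by the window length to get IOPS per address.

```python
from collections import defaultdict

def iops_by_ip(events, window_seconds):
    """events: iterable of client IP addresses, one entry per I/O observed
    during the measurement window. Returns {ip: operations per second}."""
    counts = defaultdict(int)
    for ip in events:
        counts[ip] += 1
    return {ip: n / window_seconds for ip, n in counts.items()}

# 600 ops from one client and 120 from another over a 60-second window:
ops = ["10.0.0.5"] * 600 + ["10.0.0.7"] * 120
iops_by_ip(ops, window_seconds=60)   # {"10.0.0.5": 10.0, "10.0.0.7": 2.0}
```

Once I/O is attributed to a source address this way, "random" traffic decomposes into per-client streams that can be charted and accelerated individually, which is the claim being made.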

The number of cache appliances continues to grow as companies expect serious performance boosts while budgets remain constrained. Balancing performance needs against business expenses is a critical issue, and it will remain one for some time, even as the economy begins to show gradual improvement. While all caching systems move data from slow storage to faster technology, the difference between them comes in the algorithms they use to decide which data lives in which location. Two-stage caches, application-aware caches, and large-block caches are all tools that can have important ramifications for data placement. Understanding the technology and matching the cache algorithms to your specific requirements is a crucial step in a successful deployment to the operations center.

Cree says that there are more subtle factors that also have an impact on overall performance. "We can also say what NOT to cache. We can say, for example, to not cache MP3 files and they simply will never be cached. You can pick filers, file systems, and file types you do or do not want cached. In the next release, you can turn things on and off by time of day or day of week so that data can be queued up in anticipation of known application or reporting runs."
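The exclusion and scheduling behavior Cree describes amounts to a cache admission policy. A minimal sketch, assuming a hypothetical policy object (the class, parameters, and time-window representation here are invented for illustration, not StorSpeed's API), might look like this:

```python
from datetime import time

class CachePolicy:
    """Hypothetical application-aware admission policy: never cache named
    file extensions, and only allow prefetch/warm-up during configured
    time-of-day windows (the scheduling Cree describes for the next release)."""

    def __init__(self, excluded_extensions, prefetch_windows=None):
        self.excluded = {ext.lower() for ext in excluded_extensions}
        # prefetch_windows: list of (start, end) times when warm-up is allowed
        self.prefetch_windows = prefetch_windows or []

    def should_cache(self, filename):
        ext = filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
        return ext not in self.excluded

    def prefetch_allowed(self, now):
        return any(start <= now <= end for start, end in self.prefetch_windows)

policy = CachePolicy(["mp3"], prefetch_windows=[(time(1, 0), time(5, 0))])
policy.should_cache("song.mp3")       # False: MP3s are never cached
policy.should_cache("db.dat")         # True
policy.prefetch_allowed(time(2, 30))  # True: inside the nightly warm-up window
```

The time-window check is what would let data be queued up overnight ahead of a known reporting run, as the article describes.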

The StorSpeed Blades are 2U appliances with an MSRP of $65,000 for a single node (with the eight-bay drive bay unpopulated). The Flow Director, a 24-port 10GbE switch, is $16,000. Both are available from StorSpeed now.
