Vendors Turn Flash To Cache, Saving Cash
October 20, 2011
Ever since EMC announced that it was putting SSDs--or, as it calls them, Enterprise Flash Drives--into its disk arrays, we as an industry have been straining our little brains to figure out the best way to use flash memory to improve our storage. We've used flash as a storage tier inside disk arrays, as a cache in those arrays and as dedicated storage systems. More recently, we've started seeing a variety of server- or system-side caching solutions. Is server-side caching the answer?
There’s much to be said for server-side SSD caching. Since you’re buying the SSD separately, you can buy the speed and capacity you need without paying the not-insignificant markup array vendors typically charge for blessing their favorite device. You can use PCIe flash devices to minimize latency on cache hits, and, perhaps most importantly, you can take advantage of flash caching regardless of what kind of storage you use. Small sites can even use SSDs to cache local disk storage, either through software solutions or the cache extensions LSI and Adaptec have for their RAID controllers.
They can then use VSAs (virtual storage appliances) to share the local hard disks if they need shared storage. Alternatively, since Hyper-V 3.0 in Windows Server 8 supports Live Migration without shared storage, the combination of direct-attached storage and SSD caching could be a cost-effective solution for many users and applications.
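However the cache is packaged, the basic read-caching mechanics are much the same. The sketch below, in Python for readability, shows a write-through, least-recently-used design, one common approach; it's purely illustrative, with an in-memory dictionary standing in for the SSD and a dict-like "backing" object standing in for the slow disk, not any particular vendor's implementation.

    from collections import OrderedDict

    class WriteThroughCache:
        # Toy write-through LRU block cache. The OrderedDict stands in
        # for the SSD; "backing" is any dict-like slow store mapping
        # block numbers to data. Real products do this in the kernel.

        def __init__(self, backing, capacity_blocks):
            self.backing = backing
            self.capacity = capacity_blocks
            self.cache = OrderedDict()

        def read(self, block):
            if block in self.cache:                 # hit: serve from "flash"
                self.cache.move_to_end(block)       # refresh LRU position
                return self.cache[block]
            data = self.backing[block]              # miss: go to the slow disk
            self._insert(block, data)
            return data

        def write(self, block, data):
            self.backing[block] = data              # write-through: disk is always current
            self._insert(block, data)               # keep the cached copy warm

        def _insert(self, block, data):
            self.cache[block] = data
            self.cache.move_to_end(block)
            if len(self.cache) > self.capacity:     # evict the coldest block
                self.cache.popitem(last=False)

Because writes go through to the disk, the back end stays authoritative: losing the cache device costs you performance, not data.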
Fusion-io snapped up caching software startup IO Turbine just weeks after the startup came out of stealth. Since IO Turbine's software provided caching for vSphere hosts, it was a great match, making Fusion-io's PCIe flash cards usable as cache for vSphere systems.
Several other vendors have caching software for Windows and/or Linux systems, which of course includes Hyper-V, KVM and Xen virtual server hosts. FlashSoft was first out of the gate with a software-only solution in FlashSoft SE, while STEC, Marvell and storage giant EMC have hardware/software combinations to add caching to your server. EMC's Project Lightning promises to coordinate a PCIe flash card in the server with a back-end disk array's own flash and RAM cache. The concept is intriguing, but the proof of the pudding will, as usual, be in the eating, so I'm waiting to see the details.
All of these vendors implement their caches at the block level, so they have to cache all I/Os to a volume. Startup NEVEX's CacheWorks instead implements the cache as a file system filter for Windows, which gives administrators finer control over what data gets cached. You can install the cache in a Hyper-V host and select which guest VHDs to cache, or cache databases while excluding log files and the like. VMware users can load CacheWorks in their Windows guests.
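To see why a file system filter allows that kind of control, consider the sketch below of a per-file policy check, the sort of decision a filter can make on every I/O. The include/exclude patterns and paths are invented for illustration, not NEVEX's actual rules; a block-level cache, which sees only volume offsets and never file names, has no way to make this distinction.

    import fnmatch

    # Hypothetical policy: cache database files and guest VHDs, skip
    # transaction logs. Patterns are lowercase; paths are lowered to match.
    CACHE_INCLUDE = ["*.mdf", "*.vhd"]
    CACHE_EXCLUDE = ["*.ldf", r"*\logs\*"]

    def should_cache(path):
        """Return True if I/O against this file should go through the cache."""
        p = path.lower()
        if any(fnmatch.fnmatchcase(p, pat) for pat in CACHE_EXCLUDE):
            return False
        return any(fnmatch.fnmatchcase(p, pat) for pat in CACHE_INCLUDE)

    # The database file is cached; its transaction log is not.
    assert should_cache(r"d:\sql\finance.mdf")
    assert not should_cache(r"d:\sql\finance.ldf")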
The SSD cache phenomenon has also made its way into the workstation market, where hardware solutions include Marvell's latest SATA controller chip and Intel's Z68 chipset with integrated caching, along with software solutions like Nvelo's DataPlex, which OCZ has been bundling with some of its SSDs. While hybrid drives like Seagate's Momentus XT make sense for laptops, where space is an issue (I use one in mine), for desktops I'd rather have the flexibility to choose the hard disk and cache SSD separately.
Disclaimer: None of the companies mentioned are clients of DeepStorage.net. Seagate did provide me with a pair of Momentus XT drives free of charge.