Dataram's XcelaSAN Caches Block I/O
September 29, 2009
We've started thinking of flash-based SSDs as the mainstream go-fast solution for applications starved for random I/O performance, but today's SSD solutions require storage administrators to relocate hot data to the small amount of SSD they can afford. Wouldn't you rather just drop a magic acceleration appliance into your SAN that makes everything faster? Dataram hopes you would.
Strictly speaking, there's no magic in the XcelaSAN box, just 128GB of mirrored NVRAM cache with all the data protection features Dataram -- with 40+ years in the memory business -- could come up with. Those features include ChipKill and onboard flash that the cache is dumped to if power is lost, so you don't have to maintain batteries or worry about whether power will be restored before the cache battery dies. The truly paranoid, like me, can even mirror pairs of XcelaSANs for greater redundancy.
An XcelaSAN box has eight 4Gbps Fibre Channel ports, each of which can be configured as either a target or an initiator. In a typical environment, the SAN admin would set four ports to each mode and re-zone the FC fabric so that the servers see the target ports and the storage arrays see the initiators. The XcelaSAN transparently passes the WWNs of servers and arrays through to each other, so there's no need to reconfigure LUN masking or server-side drive mappings. You can then configure the XcelaSAN through its web interface to apply write-through, write-back or no caching to each LUN.
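To make the per-LUN policy idea concrete, here's a minimal sketch -- in Python, purely illustrative and not Dataram's actual software or interface -- of how an in-band appliance might apply a per-LUN cache mode to incoming writes:

```python
# Illustrative sketch only -- not Dataram's code. It shows how an in-band
# appliance could route writes according to a per-LUN cache policy.
from enum import Enum


class CacheMode(Enum):
    WRITE_THROUGH = "write-through"  # ack only after the backing array has the data
    WRITE_BACK = "write-back"        # ack from NVRAM, destage to the array later
    NONE = "none"                    # pass I/O straight through, no caching


class CachingAppliance:
    def __init__(self):
        self.policy = {}    # LUN id -> CacheMode, set by the administrator
        self.nvram = {}     # (lun, block) -> data held in the cache
        self.dirty = set()  # cached blocks not yet written to the array

    def set_policy(self, lun, mode):
        self.policy[lun] = mode

    def write(self, lun, block, data, array):
        mode = self.policy.get(lun, CacheMode.NONE)
        if mode is CacheMode.NONE:
            array.write(lun, block, data)        # no acceleration for this LUN
        elif mode is CacheMode.WRITE_THROUGH:
            self.nvram[(lun, block)] = data      # cache the block for later reads
            array.write(lun, block, data)        # but don't acknowledge early
        else:  # WRITE_BACK
            self.nvram[(lun, block)] = data      # acknowledge as soon as NVRAM has it
            self.dirty.add((lun, block))         # destage in the background

    def destage(self, array):
        # Background flush of dirty write-back data to the backing array.
        for lun, block in list(self.dirty):
            array.write(lun, block, self.nvram[(lun, block)])
            self.dirty.discard((lun, block))
```

The point of the sketch is simply that the policy is per LUN: a database log volume can run write-back while a scratch volume is left uncached, all behind the same pair of fabric ports.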
Since XcelaSAN is a cache, it automatically identifies the hot data within the LUNs it's caching in real time. Because the cache adapts in real time, 1GB of cache can provide the same application acceleration as 4-40GB of dedicated SSD.
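Conceptually, that's just a working set that repopulates itself as access patterns shift. Here's a toy LRU read cache -- again a hypothetical Python sketch, not Dataram's algorithm -- to show the mechanism:

```python
# Illustrative sketch only: a tiny LRU read cache, the general mechanism by
# which a caching appliance keeps whatever blocks are hot right now in fast memory.
from collections import OrderedDict


class BlockCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()          # block address -> data, kept in LRU order

    def read(self, block, array):
        if block in self.blocks:
            self.blocks.move_to_end(block)   # hit: this block just got "hotter"
            return self.blocks[block]
        data = array.read(block)             # miss: go to the spinning disk
        self.blocks[block] = data            # ...and start caching the block
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict the coldest block
        return data
```

As the workload changes during the day, cold blocks age out and the new hot blocks take their place -- no administrator intervention required.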
Typical SSD systems today replace disk LUNs with faster flash LUNs. The SAN admin or DBA identifies the hot data and segregates it to the flash LUN. Since applications like to keep their data together, segregating the hot blocks from the cooler ones is time consuming, and some cool blocks will inevitably be moved to flash. Applications like Exchange, which store their whole database in one file, exacerbate this effect.

The best SSD solutions can relocate hot blocks of data from spinning disk to SSD at the sub-LUN level, but even Compellent's automatic storage tiering or EMC's promised FAST only does so once a day as a scheduled process. A cache like XcelaSAN can accelerate VDI workstation startups in the morning and then reuse the same cache to accelerate end-of-day SAP batch processes in the evening, without a SAN admin making changes or writing scripts.
Naysayers will retort that caches have been of limited value in speeding up random I/O to databases in the past, and that for the $65,000 Dataram wants for an XcelaSAN, users could buy significantly more than 128GB of flash, even at EMC's prices.
Truth is, speeding up truly random I/O takes a huge cache, but most business apps aren't that random, and an XcelaSAN has a pretty big cache (four times that of an EMC CLARiiON CX4-960). Plus, you can add an XcelaSAN to your existing SAN over the weekend, even if you, like me, are running arrays too old for flash SSD upgrades.