EMC VFCache: Project Lightning Strikes
February 13, 2012
EMC's recent announcement of the culmination of the code-named Project Lightning resulted in the new VFCache solution, a server-based flash cache that can be used as a complement or an alternative to flash storage that appears to the system as if it were a disk drive. This lightning strikes twice, though not in the same spot: the first strike is dramatically improved I/O performance for customers, and the second is the challenge that VFCache poses to competitors trying to distinguish their own flash storage solutions.
In old radio serials, episodes would start with a short recapitulation of "what has taken place so far" so that the listener would have the context to understand the latest episode. Let's apply that to flash storage.
The solid state disk (SSD) market, notably flash storage, has been a gold rush for startups as well as large vendors for some time now. The driver behind the increased use of SSD is what is called the I/O performance gap or bottleneck. As EMC pointed out in a recent analyst briefing, CPU performance improves 100 times each decade while HDD performance has remained flat (as the rotational speed for the fastest drives hasn't changed in years and is not likely to change).
But consider that while CPU performance from 2000 to 2010 increased by 100 times, by 2020 chips will deliver 10,000 times the performance of their 2000 counterparts (two decades of 100-fold improvement compound to 100 × 100, or 10,000 times). A storage device's inability to process CPU-generated I/Os fast enough (i.e., the I/O bottleneck) can be a significant problem in many cases today, but it is clearly on its way to becoming more or less universally intolerable.
Ta da! (Sound the trumpets.) Enter flash memory, stage right, with the potential to improve storage I/O performance by at least two orders of magnitude. How so? In large part because flash has none of the mechanical parts that inherently limit HDD performance. Is it any wonder that there is an SSD vendor gold rush on?
EMC was the first enterprise vendor to introduce flash in enterprise storage arrays, in 2008, with SSDs that appeared to the OS and applications on the server, and to the controller on the storage array, as if they were simply disk drives. What was still missing at that point was the ability to use flash as a tier of storage (tier 0), where only the most active (i.e., hottest) data would be kept in flash and less active data would be kept on another tier (such as tier 1 FC/SAS hard disks).
In 2009, EMC introduced software to do just this: FAST (Fully Automated Storage Tiering). FAST enables more effective use of the SSD tier and other tiers of storage not only from a performance perspective, but also from an economic perspective (as the relatively more expensive SSD storage holds only performance-sensitive data, which is typically a small subset of all data stored).
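To make the tiering idea concrete, here is a minimal sketch of the kind of policy an automated tiering layer might apply, promoting the busiest extents to flash and demoting cold ones to disk. It is purely illustrative and assumes its own extent model, thresholds, and flash capacity; it is not EMC FAST's actual algorithm.

```python
# Illustrative sketch only: a toy tiering policy that promotes "hot" extents
# to flash (tier 0) and demotes cold ones to disk (tier 1). The thresholds,
# capacity, and I/O-count heuristic are assumptions, not EMC FAST's logic.
from dataclasses import dataclass

@dataclass
class Extent:
    extent_id: int
    io_count: int   # I/Os observed in the most recent sampling window
    tier: str       # "flash" (tier 0) or "disk" (tier 1)

def rebalance(extents, promote_threshold=1000, demote_threshold=100,
              flash_capacity=10):
    """Promote the busiest extents to flash and demote cold ones to disk."""
    # Rank hottest first so the limited flash capacity goes to the busiest data.
    ranked = sorted(extents, key=lambda e: e.io_count, reverse=True)
    flash_slots = flash_capacity
    for e in ranked:
        if flash_slots > 0 and e.io_count >= promote_threshold:
            e.tier = "flash"      # hot data earns a place on the flash tier
            flash_slots -= 1
        elif e.io_count <= demote_threshold:
            e.tier = "disk"       # cold data moves back to spinning disk
    return extents
```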
To show the contrast between what is kept on SSD and what is managed by FAST, EMC states that it has sold over 24 petabytes of flash, but that the total amount of information under FAST management is 1.3 exabytes. The ratio of storage managed to flash sold is therefore over 50 to 1 (1.3 exabytes is roughly 1,300 petabytes, and 1,300 divided by 24 is about 54), which indicates that a small amount of flash goes a long way. That underlines just how successful EMC has been at selling the use of flash.
Now, any number of vendors realized that simply installing an SSD as a pseudo-HDD did not allow the device to achieve its full potential. Why? Because a disk array controller managing I/O access treats all storage devices the same, regardless of the physical characteristics that determine their I/O performance.
Developing new approaches that made better use of SSD's primary qualities was a particularly good move for smaller companies, since it allowed them to compete more effectively against larger vendors.
The approach several came up with is a caching layer known as flash cache (typically flash, although some combination of flash and DRAM is possible). The two principal (but not necessarily exclusive) locations for this layer are server-based flash cache (housed within the server itself) and server-network flash cache (housed in the network between the servers and storage, i.e., Ethernet, rather than in the SAN storage network, i.e., Fibre Channel). Of course, SSDs can also be used as a cache on the array itself, for example, FAST Cache on CLARiiON and VNX arrays.
Though this approach has primarily (but not exclusively) been the province of smaller companies, large vendors (including EMC, but not limited to EMC) are poised to strike back. Which brings us to EMC's Project Lightning.
In essence, EMC's VFCache is 300 GB of SLC (Single-Level Cell) flash memory (not MLC, or Multi-Level Cell, flash memory) on a PCIe (Peripheral Component Interconnect Express) card. Since PCIe is a popular computer expansion bus standard that supports hardware I/O virtualization, it seems a good place to put the flash memory (and it also supports EMC's long-held strategy of integrating its entire solutions stack with VMware virtualization). EMC states that being more closely coupled to the server gives at least an order of magnitude improvement in performance over SSDs incorporated as disk drives within a storage array. Note that EMC feels that terabytes of data are not needed, as in one competitive solution that is primarily direct-attached storage; VFCache is strictly a cache and so can serve an application data set far larger than its own capacity (remember the over 50-to-1 ratio mentioned earlier). Note also that EMC feels that SLC's characteristics, such as endurance, make it a better choice at this time for enterprise-class solutions, although MLC may very well serve a role down the road.
VFCache is storage-agnostic, which simply means that it can work with storage arrays of any sort, including those made by EMC's competitors. However, EMC can and will incorporate VFCache as a new tier of FAST storage where the hottest data resides on PCIe flash. This would, in effect, tightly couple VFCache to EMC storage, which is most likely the way that EMC will position the new solution to customers.
VFCache obviously works well where goosing performance over current levels (EMC states a 50 percent improvement in response time and a 210 percent improvement in throughput) provides an advantage, notably for OLTP (online transaction processing) systems and business analytics. Naturally, your mileage may vary, as the performance improvement will depend on workload characteristics. Given the critical nature of the former to enterprises of every sort and the growing use of the latter across multiple industries, VFCache should be a valuable addition to EMC's arsenal.
Note that flash cache is not just physical hardware; to maximize its benefits, it has to be powered by sophisticated software for processes such as examining I/Os. EMC's VFCache uses a lightweight I/O inspection technology (meaning that it consumes very few server CPU resources, which EMC claims is a competitive advantage) that leverages some of the capabilities of the company's well-tested PowerPath software. Caching and cache management logic is also critical, and EMC has a great deal of experience in this area (as every EMC storage array has a caching layer that the company has polished and refined over the years).
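For readers who want a feel for what "caching and cache management logic" involves, the following is a minimal sketch of a read cache keyed by block address with least-recently-used eviction. It is an illustration only, with an assumed block-address interface and capacity; it is not EMC's VFCache implementation.

```python
# Illustrative sketch only: a read cache keyed by block address with LRU
# eviction, roughly the kind of bookkeeping any flash cache layer needs.
# Not EMC's VFCache algorithm; the interface and capacity are assumptions.
from collections import OrderedDict

class ReadCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()            # block address -> cached data

    def read(self, address, backend_read):
        """Serve a block from flash if cached; otherwise fetch from the array."""
        if address in self.blocks:
            self.blocks.move_to_end(address)   # refresh recency on a hit
            return self.blocks[address]        # cache hit: no array I/O needed
        data = backend_read(address)           # cache miss: go to back-end storage
        self.blocks[address] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)    # evict least recently used block
        return data
```

Here backend_read stands in for whatever routine actually fetches a block from the back-end array; a production cache would also need a write policy (write-through, write-back, or write-around) and far more careful handling of concurrency and persistence.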
One very interesting VFCache capability is called split card, which allows part of the flash card to be treated strictly as cache and part as if it were DAS (direct-attached storage). The cache portion changes dynamically as different data needs to be placed there for performance reasons. The DAS-like portion, by contrast, can be used to store ephemeral data, such as temporary databases in a SQL Server environment, and is not subject to the algorithms that move data to and from the portion that acts as true cache. This capability gives IT more flexible choices in managing the overall card.
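As a rough sketch of the split-card idea (again, illustrative assumptions only; the volume-tagging scheme and routing logic below are not VFCache's design), I/O aimed at volumes designated as scratch could bypass the caching algorithms entirely, while everything else flows through the cache region:

```python
# Illustrative sketch only: one region of the card acts as cache, the other
# as DAS-like scratch space that the caching algorithms never touch.
class SplitCard:
    def __init__(self, cache, scratch_store, scratch_volumes):
        self.cache = cache                      # e.g. the ReadCache sketched above
        self.scratch_store = scratch_store      # dict standing in for the scratch region
        self.scratch_volumes = set(scratch_volumes)

    def read(self, volume, address, backend_read):
        if volume in self.scratch_volumes:
            # Scratch region: application-managed, never subject to cache eviction.
            return self.scratch_store.get((volume, address))
        # Cache region: managed dynamically by the caching algorithms.
        return self.cache.read(address, backend_read)

    def write_scratch(self, volume, address, data):
        self.scratch_store[(volume, address)] = data
```

The design point being illustrated is simply that the scratch region is application-managed and static, while the cache region is governed by the caching algorithms.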
As noted above, the second place flash cache is most often located is in the server-network. The reason for this is simple: server-based cache is dedicated to a particular server (which means that each server requires its own dedicated PCIe flash cache card). While that may be optimal for some critical applications, there can also be a problem in establishing the correct size of the flash cache on a server. If the application requires less cache than the minimum card size, the underutilized space is wasted, as it cannot be shared with other servers; if the application requires more flash cache than the server can hold, the application cannot achieve all the performance that it otherwise could have obtained.
Now, this problem can occur even when application I/O demands are predictable within a very narrow band over time, i.e., more or less static. However, the problem is further exacerbated when application I/O varies dynamically for a known period of time (such as the significant boost in OLTP during the December holiday season) or as the result of a special event (such as a product launch or promotion). Sharing cache in the network may not solve all of these problems, but it provides better workload optimization than non-shared server cache.
So it should come as no surprise that EMC has pre-announced (with initial availability in Q2 2012) a scalable server-network-based flash appliance that will couple together multiple VFCache cards. Now, the surprise is not what EMC is announcing, but that it pre-announced at all, given its history of not doing public technology previews. One can assume a simple rationale: by doing so, the company is alerting potential customers so they can incorporate EMC's direction into their planning processes. While VFCache may prove to be very beneficial, wise IT organizations recognize that implementing new technologies in their IT infrastructure requires serious due diligence, which takes time. And, oh by the way, EMC's pre-announcement also blunts potential arguments by server-network flash cache vendors.
What does this do to the competitive market? Well, some might say that EMC is validating the SSD market. Actually, while the announcement is a strong reaffirmation of the value of SSD, the market was already validated by EMC's solid success in these areas.
Narrow technical arguments aside, some competitors may differentiate themselves by their configuration (such as being able to share flash cache among multiple servers) or their market focus (such as accelerating big data adoption).
Still, the challenge for all these players is share of mind. EMC knows its customers and has access to them for a full-court press. And its installed base is likely to be where EMC will focus. The reason is that although VFCache is storage-agnostic, most sales will come from targeted selling (of SSD storage specifically) rather than from a general sales focus (such as storage arrays).
Along with the rest of the IT infrastructure, storage is in transition. Even though the cost per unit of SSD storage is higher than that of the fastest disk drives, prices are falling and SSD usage is increasing in popularity, since the technology provides levels of performance that HDDs cannot easily or cost-effectively provide. Increasingly, high-performance (15K RPM) drives will be replaced by SSDs where performance is needed, while capacity disks (lower rotational speeds but higher density than 15K RPM drives) will continue to be used for random access to less frequently accessed, less performance-demanding data.
The question remains of exactly where SSD storage, in the sense of flash storage, should be placed. The initial thought was that it would simply be used as another tier (tier 0) within a storage array, but it has become increasingly obvious that placing flash storage more strategically delivers, for the majority of use cases, better performance for the same investment. EMC's VFCache announcement confirms this point, as does the company's upcoming server-network-based flash appliance. The SSD market has already been a hot one, and EMC's announcement is sure to make it even hotter.
EMC is a current client of David Hill and the Mesabi Group.