Getting Real About SSD Performance
Solid state storage has moved beyond the learning stage and storage managers are trying to decide how to best use the new technology.
April 8, 2011
Solid state storage has moved beyond the learning stage, and storage managers are now trying to decide how best to use the technology. Unlike taking a chance on a new, less expensive disk system, the risk with solid state storage is greater. It probably costs more than your current system, so it had better live up to its promise by performing better and being more reliable. Users are looking for solutions that deliver on those promises.
As we discussed in our recent webinar and in our briefings with solid state vendors at Storage Networking World, the reliability issue has been handled, or at least it can be managed, as long as you are working with a vendor who understands the technology. Clearly, the work on reliability will never stop. As NAND flash advances, so does the error rate, and vendors need to keep getting better at accounting for those errors.
By selecting the right vendors, doing good research, and testing properly, you can give the reliability of solid state storage a confident checkmark. Where I get concerned, and what I hear constantly from users, is "How do I get maximum performance out of solid state technology?" or "How do I get great performance without having to redo my entire infrastructure or rewrite my application?" Those are great questions, and right now they are the hardest to answer.
My first piece of advice is to resist, for as long as you can, any solution that forces major rewrites to application code. I'm not saying you won't have to do that at some point, but for many, many users now is not the time. There are scale-up solutions that should spare you from it. By scale-up, I mean moving the application to a faster processor with faster storage, in the form of solid state, before you look at complicated application clusters or database sharding workarounds. Simple is always better. A finite number of changes, instead of a massive overhaul, will be easier to buy, implement, and operate. You may be surprised just how much performance this approach can deliver today.
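To make that tradeoff concrete, here is a minimal, purely hypothetical sketch of the kind of routing logic a sharding workaround pushes into your application. The shard names and hashing scheme are illustrative assumptions, not anyone's actual design:

```python
# Hypothetical sketch: the routing logic sharding forces into application
# code. Shard names and the hash scheme are illustrative assumptions only.
import hashlib

SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2", "db-shard-3"]

def shard_for(customer_id: str) -> str:
    """Map a customer to a shard with a stable hash."""
    digest = hashlib.md5(customer_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Every query path now needs shard awareness, cross-shard reports need
# scatter-gather logic, and adding a shard means rebalancing data.
# Scale-up keeps the original single-database code path:
def database_for(customer_id: str) -> str:
    return "db-primary"  # same code, just a faster box with SSD behind it
```

That second function is the whole point: with scale-up, the application code doesn't change; only the hardware underneath it does.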
Second, get real about what your performance demands are. Honestly, I wish vendors would stop already with the 10 billion I/Os per second (IOPS) benchmarks. It is great that we can hit those numbers, and I'm sure we all feel very proud as we thump our chests in victory, but I'm not sure they do users much good. These configurations cost more than most data centers! I want to see how cheaply and simply you can get a user to 100,000 IOPS on a configuration they will actually use and be able to afford. High-end benchmarks do have their place; we learn a lot that trickles down when stretching the limits. In reality we need both: how fast can you go no matter the cost, AND how fast can you go with a $150K investment?
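One way to keep benchmark numbers honest is to normalize them to dollars per IOPS. Here's a back-of-the-envelope sketch; the prices and IOPS figures are made-up assumptions for illustration, not vendor data:

```python
# Hypothetical cost-per-IOPS comparison. All prices and IOPS figures are
# illustrative assumptions, not real vendor numbers.

def cost_per_iops(price_usd: float, iops: float) -> float:
    """Dollars spent for each I/O per second of delivered performance."""
    return price_usd / iops

# A chest-thumping benchmark rig vs. a configuration a user might buy:
benchmark_rig = cost_per_iops(3_000_000, 1_000_000)  # assumed $3M rig
realistic_rig = cost_per_iops(150_000, 100_000)      # assumed $150K config

print(f"benchmark rig: ${benchmark_rig:.2f}/IOPS")
print(f"$150K config:  ${realistic_rig:.2f}/IOPS")
```

Run the same division on any quote you receive; the headline IOPS number matters far less than what each of those IOPS costs you.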
My advice to storage managers is to improve performance only enough to meet this year's demands. Performance is a continuum that ranges from the slow to the incredibly fast, and that continuum will constantly be stretched further and further out. The problem is that the further you go down that continuum toward high performance, the more expensive and complex the environment becomes. The reality is that many applications and data centers don't need to go very far down that continuum at all. If you need 100,000 IOPS, don't invest in a 500,000 IOPS solution; the extra 400,000 IOPS will go to waste. Plus, it is no secret that in three years, when you actually do need 500,000 IOPS, you'll be able to get it for a lot less money and probably with a lot less complexity.
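To see why waiting pays, here's a quick sketch of the economics. The dollars-per-IOPS figure and the 30 percent annual price decline are assumptions chosen only to illustrate the point, not a forecast:

```python
# Hypothetical illustration of buying for this year's demand instead of
# overbuying. The $/IOPS price and 30% annual decline are assumptions.

price_per_iops_today = 1.50  # assumed $/IOPS for flash-backed performance
annual_decline = 0.30        # assumed yearly price drop

cost_overbuy = 500_000 * price_per_iops_today   # buy 500K IOPS today
cost_needed = 100_000 * price_per_iops_today    # buy only what you need now

# Cost of the extra 400K IOPS if the purchase is deferred three years:
deferred = 400_000 * price_per_iops_today * (1 - annual_decline) ** 3

print(f"overbuy today:        ${cost_overbuy:,.0f}")
print(f"buy 100K IOPS now:    ${cost_needed:,.0f}")
print(f"add 400K in 3 years:  ${deferred:,.0f}")
print(f"total staged:         ${cost_needed + deferred:,.0f}")
```

Under these assumed numbers, staging the purchase costs roughly half of overbuying up front, and that is before counting the complexity you avoid carrying in the meantime.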
There are ways to improve performance without breaking the budget or investing in a whole new application architecture. This can be done whether you have a specific application problem or want to provide a performance boost to a broad range of servers and users. In the next entry we will look at cost-effective ways to improve performance without having to inflict massive changes on the environment; in short, we will be discussing SSD Enablement.
Track us on Twitter: http://twitter.com/storageswiss
Subscribe to our RSS feed.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.