The New Storage Bottlenecks
To eliminate performance bottlenecks, you have to look past the solid state disk and consider the system as a whole.
February 10, 2012
Recently, I discussed some reasons you may want to look at server-based tiering, and some reasons you may not. That entry covered how a server-based tier or cache of solid state disk (SSD) can get around storage network performance problems. Now we will look at how an SSD tier in the server can overcome a storage performance bottleneck, even if the shared storage already has solid state technology on it.
Assuming that you meet the requirements for a solid-state storage system, as we discuss in our Visualizing SSD Performance white paper, solving storage system problems involves two parts: first, integrating solid state into the storage system, and second, making sure that the storage system itself is not a bottleneck to maximum solid state performance.
Integrating SSD into the storage system is mostly done today by packaging solid state storage in a hard drive form factor. This allows SSD vendors to install solid state into their current disk shelves and quickly bring a solid state solution to market. Other vendors have either leveraged PCIe SSD directly inside the storage system or have developed standalone appliances or storage systems that are solid-state only.
For the most part, the type of SSD used in the storage system does not significantly impact the performance you should expect from that tier of storage. The main drawbacks of drive-form-factor SSDs are size and power disadvantages versus purpose-built designs that look more like memory modules than drives. A flash chip does not need the volume of space that a hard drive needs, nor does it need the same amount of power. The cost of getting to market quickly is a loss of that space and power efficiency.
The real performance challenge for vendors looking to integrate SSD into their storage system is making sure that the system itself does not become the bottleneck. If you think about it, a storage system is really a combination of servers, networking, and data storage devices. The servers are commonly called controllers, the network is the connectivity from those controllers to the storage devices and to the attaching hosts, and the storage devices are the hard drives or SSDs. The performance of the other two parts, the controller and the network, is critical to achieving maximum performance.
The inbound data flow to the controller from the hosts, as well as its ability to pull data from the storage devices, directly impacts performance. In the past, with mechanical hard drives, there was enough latency in the drives themselves that the performance of the controller and its networking went largely unnoticed. It was always the hard drive's fault.
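A bit of back-of-the-envelope arithmetic shows why. The figures in this sketch (roughly 5 ms for a mechanical drive, 0.1 ms for flash, 0.2 ms of combined controller and network overhead) are illustrative assumptions, not measurements of any particular system:

```python
# Illustrative latency budget for one random read (all figures assumed).
hdd_media_ms = 5.0   # seek + rotational latency of a mechanical drive (assumption)
ssd_media_ms = 0.1   # flash read latency (assumption)
overhead_ms = 0.2    # controller processing plus network hops (assumption)

for name, media_ms in [("HDD", hdd_media_ms), ("SSD", ssd_media_ms)]:
    total = media_ms + overhead_ms
    share = overhead_ms / total * 100
    print(f"{name}: {total:.2f} ms total, controller/network is {share:.0f}% of it")

# HDD: 5.20 ms total, controller/network is 4% of it
# SSD: 0.30 ms total, controller/network is 67% of it
```

With mechanical drives, the overhead is a rounding error; with flash, it becomes the majority of the response time.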
Solid state changes that. Media latency is all but eliminated, and the storage device itself is seldom at fault. The issue becomes the network from the controller to the devices, and the network from the controller to the attaching hosts. Even if you upgrade to the fastest network available, the dozens of hosts accessing shared storage all funnel through the connection points in the storage controller. A key concern is whether the storage controller can process all of those storage I/O requests and read or write them to and from the storage media. We have repeatedly seen storage controllers become flooded by these operations, which leads to purchases of multiple storage systems and to limits on the number of SSDs per storage shelf.
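The arithmetic behind that flooding is simple. As a hedged illustration (the shelf size and IOPS figures below are round-number assumptions, not benchmarks of any product), a single shelf of SSDs can generate far more I/O than one controller can absorb:

```python
# Aggregate media throughput vs. one controller (all figures assumed).
ssds_per_shelf = 24        # drive bays in a typical shelf (assumption)
iops_per_ssd = 50_000      # random-read IOPS per SSD (assumption)
controller_iops = 300_000  # IOPS one controller can process (assumption)

media_iops = ssds_per_shelf * iops_per_ssd
print(f"Media can supply:      {media_iops:>9,} IOPS")
print(f"Controller can handle: {controller_iops:>9,} IOPS")
print(f"Usable flash performance: {controller_iops / media_iops:.0%}")
```

In this scenario three-quarters of the flash performance is stranded behind the controller, which is exactly why vendors end up capping the number of SSDs per shelf.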
The way around this? Either build a storage system that can handle solid state performance, which probably means designing a new storage backend infrastructure, or use a server-based SSD tier to offload as much of the I/O as possible, masking the potential bottleneck in the storage controller.
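To make the offload idea concrete, here is a minimal sketch of the read-cache behavior a server-based SSD tier provides. Everything in it is hypothetical: the dictionary stands in for blocks staged on the server's local SSD, and read_from_array is a placeholder for a request that crosses the network to the shared storage controller, not any vendor's API:

```python
# Minimal server-side read-cache sketch (hypothetical, not a vendor API).
local_ssd_tier = {}  # stands in for data staged on the server's local SSD

def read_from_array(block_id):
    """Placeholder for a read that traverses the network to the shared array."""
    return f"data-for-block-{block_id}"

def cached_read(block_id):
    # Hit: served locally, so the storage controller never sees the request.
    if block_id in local_ssd_tier:
        return local_ssd_tier[block_id]
    # Miss: go to the array once, then keep a local copy for next time.
    data = read_from_array(block_id)
    local_ssd_tier[block_id] = data
    return data

cached_read(42)  # first access: one trip through the storage controller
cached_read(42)  # repeat accesses: absorbed entirely by the server tier
```

Every hit is an I/O the storage controller never has to process, which is how the server tier hides the controller's ceiling from the hosts.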
Nothing is wrong with the server-based tiering approach, and it is certainly something to consider as you look at improving the performance of existing storage systems. However, as you consider new systems, if you will be counting on solid state performance, you may also want to consider ones that can deliver the performance that SSD and high-speed networks promise without the need for a server-based workaround.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Storage Switzerland's disclosure statement.