What's Next In Storage Interfaces
Storage infrastructures need the ability to handle more random I/O, deliver higher IOPS performance, and take away some of the load from the primary CPU.
April 26, 2011
Thanks to server virtualization and the increasing criticality of databases, the requirements of the storage infrastructure have changed dramatically over the past couple of years. Storage infrastructures need the ability to handle more random I/O, deliver higher IOPS performance, and take away some of the load from the primary CPU. The way your server connects to the storage network can play a huge factor in achieving maximum performance and maximum utilization.
When looking to improve the infrastructure, the most obvious place to start is raw speed: how fast is that infrastructure? As a famous football coach once said, "You can't teach speed." Many data centers are deploying 8-Gbit Fibre Channel cards and 10-Gbit Ethernet cards. In most companies, infrastructure upgrades are a gradual migration, not an overnight swap-out. Deployment of 16-Gbit Fibre Channel is getting closer to reality, with production cards expected either at the end of this year or early next year.
The more available bandwidth you have, assuming your environment can take advantage of it, the easier it is to solve performance problems. Speed, though, has to be applied almost universally. Certainly the connection between the host and the switch port has to run at the same speed on both ends. Storage can be a little different, though. Storage systems with multiple I/O paths to the switch may be able to run lower-bandwidth links if the storage system properly aggregates those connections. As with most aggregation technologies, there is a point of diminishing returns, and eventually you want high-bandwidth links both to your storage and to the switch.
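The aggregation described above can be illustrated with Linux link bonding; a minimal sketch, assuming a modern Linux host with two storage-facing interfaces and an LACP-capable switch (the interface names `eth0`, `eth1`, and `bond0` are placeholders):

```shell
# Aggregate two lower-speed links into one logical 802.3ad (LACP)
# interface facing the storage network. Names are illustrative.
modprobe bonding
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0
ip link set bond0 up
ip link set eth0 up
ip link set eth1 up
# Caveat: a single flow still hashes onto one member link, which is
# one source of the diminishing returns noted above.
```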
Performance is more than just high bandwidth. It is also a question of how intelligently the card uses that bandwidth. A good example is the quality of service (QoS) capability we are seeing in today's interface cards. Similar to the QoS capabilities found at the switch level, these cards allow you to set priorities directly on the card. Ideally used in a virtualized server environment, these cards can provide more bandwidth to mission-critical servers when they need it. This allows you to better maintain SLAs for those mission-critical applications.
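On Linux, card-level priorities of this kind are commonly exposed through hardware traffic classes via the `mqprio` queueing discipline; a hedged sketch, assuming an adapter whose driver supports hardware traffic classes (`eth0` is a placeholder):

```shell
# Split traffic into two hardware traffic classes on the card.
# "hw 1" asks the NIC, not the kernel, to enforce the classes,
# so prioritization happens on the card itself.
tc qdisc add dev eth0 root handle 1: mqprio \
    num_tc 2 \
    map 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 \
    queues 4@0 4@4 \
    hw 1
# Verify what the driver actually accepted.
tc qdisc show dev eth0
```

The 16-entry `map` assigns socket priorities to traffic classes; how each class is then weighted is driver- and card-specific.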
Another capability emerging in interface cards is card-level virtual switching. If two virtual machines (VMs) on a single physical host need to communicate with each other, that traffic does not need to leave the host for the network switch; it can stay on the virtual switch on the card in the host. This offloads work from the hypervisor and keeps traffic off the physical network.
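SR-IOV-capable adapters are one place this shows up: each VM gets a virtual function (VF), and VF-to-VF traffic is switched by the card's embedded bridge. A hedged sketch of exposing VFs on a modern Linux kernel (the path and interface name are illustrative, and the exact mechanism varies by driver):

```shell
# Ask the adapter to expose 4 virtual functions. Traffic between
# two VFs on the same card is switched by the card's embedded
# bridge instead of leaving the host for the physical switch.
echo 4 > /sys/class/net/eth0/device/sriov_numvfs
# Each VF now appears as its own PCI device that can be handed
# to a VM.
lspci | grep -i "virtual function"
```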
When it comes to offloading the hypervisor, we see interface cards ready to help. First, in the IP storage world there is the obvious help of having the IP conversation handled right on the card. While most physical servers seem to have enough processing power to handle a SCSI-to-IP conversion, doing it in hardware brings a greater level of predictability to the environment, especially during peak load times.
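On Linux you can see which pieces of the IP conversation a card already handles in hardware with `ethtool`; a short sketch (feature names vary by driver, `eth0` is a placeholder):

```shell
# List which offloads the card performs in hardware.
ethtool -k eth0 | grep -E 'segmentation|checksum'
# Enable TCP segmentation offload so the host CPU hands the card
# large buffers instead of building each frame itself.
ethtool -K eth0 tso on
```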
One of the other challenges is dealing with communication on the physical host, which is all interrupt-driven. A processing core must be interrupted to examine an inbound packet, and then the core running the VM the packet was intended for must be interrupted as well. On an active host, this process can greatly reduce bandwidth efficiency. A growing number of cards support one of several standards that allow better communication among the card, the hypervisor, and the virtual machines. Essentially, they allow a packet to be tagged for a specific virtual machine. This makes the hypervisor and the VM more efficient because the packet can be delivered directly to the intended virtual machine without the extra interrupts.
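PCI SR-IOV is one such standard: the card demultiplexes inbound packets by MAC (and optionally VLAN) so they land directly in the right VM's queues, interrupting only that VM's core. A hedged sketch of the host-side setup, assuming VFs have already been enabled (interface name, VF index, MAC, and VLAN are all illustrative):

```shell
# Tag VF 0 with the VM's MAC address so the card can steer inbound
# packets for that MAC straight to the VF's queues, bypassing the
# hypervisor's software switch and the extra interrupts.
ip link set eth0 vf 0 mac 52:54:00:12:34:56
ip link set eth0 vf 0 vlan 100
# Confirm the per-VF settings the driver programmed.
ip link show eth0
```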
The next generation of network interface cards is going to be about a lot more than speed and convergence. It is also going to be about intelligence. More intelligent cards will allow not only higher percentages of bandwidth utilization but also greater efficiency in the hypervisor and the virtual host's processing resources.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Storage Switzerland's disclosure statement.