The Need for Card-Based QoS


George Crump

September 9, 2009


With server virtualization hitting full stride, a key theme of phase two of these projects is increasing virtual machine density. There are many aspects to increasing virtual machine density, but one that is often overlooked is card-based (network or storage) QoS.

QoS has long been available in the network infrastructure, but only recently have we started to see it appear at the card level. The need for Network Interface Card (NIC) QoS or Host Bus Adapter (HBA) QoS is being driven by two changes in the environment. The first is server virtualization itself. Instead of the old days, when there was a 1:1 ratio between workload and interface card, there can now be 10, 20 or more workloads per card. If even one of those workloads starts to get busy, it may starve the other virtual machines of resources. That possibility, no matter how remote, is going to give some application owners the excuse they need to keep their application on a standalone server.

The second reason is the increase in overall bandwidth. We are getting ready to make big jumps from 1Gb Ethernet to 10Gb, and from 4Gb Fibre Channel to either 8Gb Fibre Channel or 10Gb FCoE. The problem with these upcoming speed boosts is that there are a finite number of single-workload servers that can truly take advantage of the additional bandwidth. While the various hypervisors do an admirable job of parsing out that bandwidth to their virtual machines, there is plenty of room for improvement. Also remember that the upgrades in speed won't stop here; 40Gb Ethernet and 16Gb Fibre Channel are both on the way. As cards increase in speed, the need to be more intelligent about managing and optimizing this bandwidth becomes critical.

QoS-like functionality for cards is here now. Companies like Solarflare and Neterion have high-speed IP cards that can be divided up into channels. For example, with these cards you could create 10 separate channels and have the card appear to the hypervisor as 10 physical 1Gb cards, each assigned to its own VM or group of VMs. These cards work well for the IP side of the host server and for environments that are using NFS for virtual machine storage.
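To make the idea concrete, here is a minimal sketch, in Python, of how a channelized card divides its bandwidth into guaranteed slices. The class, channel counts and VM names are invented for illustration; this is not any vendor's actual API.

```python
# Illustrative sketch only -- not Solarflare's or Neterion's actual API.
# Models a 10Gb NIC carved into fixed channels, each pinned to a VM.

class ChannelizedNic:
    def __init__(self, total_gbps: float, channels: int):
        self.total_gbps = total_gbps
        self.channels = channels
        # Each channel gets an equal, guaranteed slice of the card.
        self.per_channel_gbps = total_gbps / channels
        self.assignments = {}  # channel index -> VM name

    def assign(self, channel: int, vm: str):
        if channel in self.assignments:
            raise ValueError(f"channel {channel} already assigned")
        self.assignments[channel] = vm

# A 10Gb card split into 10 channels looks to the hypervisor
# like ten independent 1Gb NICs.
nic = ChannelizedNic(total_gbps=10.0, channels=10)
nic.assign(0, "exchange-vm")
nic.assign(1, "web-vm")
print(nic.per_channel_gbps)  # 1.0 -- each VM's guaranteed share
```

The point of the carve-up is isolation: a busy neighbor on channel 0 cannot eat into the slice reserved for channel 1.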

For block storage, especially Fibre Channel, NPIV can be used to help with virtual machine identification. NPIV (N_Port ID Virtualization), as we discuss in our article "Using NPIV to Optimize Server Virtualization's Storage," is a capability unique to Fibre Channel SANs that allows virtual HBAs to be assigned to the virtual machines on a host. Even untapped, its value is giving you the ability to drill down into a Fibre Channel switch and understand storage traffic from the virtual machine's perspective.
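For illustration, here is a hypothetical sketch of the visibility NPIV provides: each VM logs into the fabric with its own virtual WWPN, so switch-side counters can be attributed per VM. The WWPNs, VM names and statistics below are made up, not output from any actual switch.

```python
# Hypothetical illustration of NPIV-style per-VM visibility.
# With NPIV, each VM's virtual HBA has its own WWPN on the fabric,
# so switch traffic counters map back to individual VMs.

vm_by_wwpn = {
    "20:00:00:25:b5:aa:00:01": "exchange-vm",
    "20:00:00:25:b5:aa:00:02": "web-vm",
}

# Switch-side traffic counters, keyed by virtual port (invented numbers).
port_stats_mb = {
    "20:00:00:25:b5:aa:00:01": 5120,
    "20:00:00:25:b5:aa:00:02": 640,
}

for wwpn, mb in port_stats_mb.items():
    print(f"{vm_by_wwpn[wwpn]}: {mb} MB of storage traffic")
```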

Companies like Brocade are leveraging and extending NPIV to offer something closer to real QoS by prioritizing the storage I/O coming out of a host server. By extending extra buffer credits to higher-priority workloads, these solutions provide many of the same benefits as traditional QoS.
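To see why extra buffer credits matter, consider this back-of-the-envelope sketch. In Fibre Channel flow control, a sender may only transmit while it holds buffer credits, so the credits it holds cap how much data it can keep in flight. The credit counts, frame size and round-trip time below are assumed numbers, and the calculation is the generic flow-control ceiling, not Brocade's actual implementation.

```python
# Hypothetical sketch of buffer-credit prioritization -- assumed
# numbers, not Brocade's actual mechanism.

def max_throughput_gbps(credits: int, frame_kb: float, rtt_us: float) -> float:
    # At most `credits` frames can be unacknowledged at once, so
    # throughput is capped at credits * frame size per round trip.
    bits_in_flight = credits * frame_kb * 1024 * 8
    return bits_in_flight / (rtt_us * 1000)  # bits per ns == Gb/s

# Same link and frame size -- the workload granted more credits can
# keep more data in flight and sees higher sustained throughput.
print(max_throughput_gbps(credits=8,  frame_kb=2, rtt_us=50))  # ~2.6 Gb/s
print(max_throughput_gbps(credits=16, frame_kb=2, rtt_us=50))  # ~5.2 Gb/s
```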

Finally, the proposed Converged Enhanced Ethernet (CEE) standard contains the foundational capabilities to unify this bandwidth prioritization under the FCoE flag. As we discussed back in March in our InformationWeek article, "Why 'Unified' Is The Hot New Idea For Data Centers," CEE includes Enhanced Transmission Selection (ETS). This essentially lays the framework for a QoS type of function, allowing priorities to be assigned to certain types of traffic; for example, allocating 60% of the bandwidth to storage and 30% of the bandwidth to standard IP traffic. This could be made more granular still, locking certain VMs to exact amounts of bandwidth while the remaining VMs share what's left.
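Here is a quick worked example of that arithmetic, assuming a 10Gb converged link; the percentages match the example above, and the per-VM reservations are made-up numbers.

```python
# Illustrative ETS-style allocation on an assumed 10Gb converged link.

LINK_GBPS = 10.0

# Priority-group shares: 60% storage, 30% standard IP, remainder other.
groups = {"storage": 0.60, "ip": 0.30, "other": 0.10}
group_gbps = {name: share * LINK_GBPS for name, share in groups.items()}
print(group_gbps)  # {'storage': 6.0, 'ip': 3.0, 'other': 1.0}

# Finer granularity: pin specific VMs to exact amounts inside the IP
# group; the remaining VMs share whatever is left (VM names invented).
pinned = {"exchange-vm": 1.0, "sql-vm": 0.5}  # reservations in Gb/s
leftover = group_gbps["ip"] - sum(pinned.values())
print(leftover)  # 1.5 Gb/s shared by the unpinned VMs
```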

In the end, card-based QoS allows network, storage and virtual server managers to provide application owners with an SLA based on guaranteed performance, and with it removes one of the more legitimate objections to virtualizing mission-critical applications.
