Interconnect Wars: InfiniBand Fights Back

A wave of new products and a branding campaign by the InfiniBand Trade Association should raise the profile of the high-speed interconnect technology

November 21, 2008

When it comes to storage industry interconnects, much of the focus lately has been on the battle between Fibre Channel and iSCSI, the development of new technologies like Fibre Channel over Ethernet, and Cisco's plans for Data Center Ethernet. Much less attention has been given to the high-speed interconnect known as InfiniBand, and proponents of the technology are tired of being neglected.

This week, InfiniBand vendors launched a batch of new products and a marketing counteroffensive at the Supercomputing 2008 show in Austin, Texas, to convince businesses that the high-performance interconnect deserves a place in the enterprise data center.

"InfiniBand is all about performance and all about interoperability," says Brian Sparks, director of marketing communications at Mellanox Technologies Ltd. (Nasdaq: MLNX) and co-chair of the marketing working group at the InfiniBand Trade Association (IBTA). "If you look at the top 500 supercomputers, InfiniBand has grown from 30 in the top 500 in 2005 to 142 today. That's more than 25 percent. And supercomputers that use proprietary interconnects have dwindled."

The reason, he says, is performance. The new quad data rate (QDR) InfiniBand can hit data rates of 40 Gbit/s node-to-node and is expected to achieve speeds of 80 Gbit/s by 2011. "We can see that eventually going to 160 Gbit/s," he says. And the technology can achieve switch-to-switch speeds of 120 Gbit/s.
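
Those figures follow from InfiniBand's lane arithmetic: links come in 1x, 4x, and 12x widths, and QDR signals at 10 Gbit/s per lane, so a 4x link yields the 40-Gbit/s node-to-node figure and a 12x link the 120-Gbit/s switch-to-switch figure. A quick back-of-the-envelope sketch in Python (the per-lane rates and link widths are standard InfiniBand spec values, not figures from the IBTA announcement):

```python
# Sketch of InfiniBand link arithmetic: per-lane signaling rate times link
# width. Rates and widths are standard spec values, not from the article.
SIGNALING_GBPS = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0}  # per lane
LINK_WIDTHS = {"1x": 1, "4x": 4, "12x": 12}             # lanes per link

for rate, per_lane in SIGNALING_GBPS.items():
    for width, lanes in LINK_WIDTHS.items():
        print(f"{rate} {width}: {per_lane * lanes:g} Gbit/s")

# QDR 4x  -> 40 Gbit/s  (the node-to-node figure quoted above)
# QDR 12x -> 120 Gbit/s (the switch-to-switch figure)
# Caveat: with 8b/10b line coding, usable data throughput is about 80% of
# these signaling rates; vendors, like the article, quote signaling rates.
```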

That's one reason analysts continue to predict strong growth for the technology. IDC forecasts a compound annual growth rate for host channel adapter revenues of 35 percent, with sales hitting $279.7 million in 2011. The growth rate for InfiniBand switch revenues is forecast at 47.2 percent, hitting $656.3 million in the same year. In a March report, IDC predicted that InfiniBand will expand out of the high-performance computing market into mainstream data centers.

The Taneja Group says InfiniBand, well known as a leader in high-performance computing, "is gaining increasing adoption in general purpose enterprise computing" because of its cost effectiveness, simplicity, flexibility, and performance. In an April report, Taneja said a preliminary look suggests InfiniBand's "growth trajectory may be on par with the fastest growing storage systems in the market," approaching a growth rate of 60 to 70 percent. It concluded that "selective use of InfiniBand can go a long ways in reinventing the enterprise infrastructure and creating a data center with second generation capabilities."
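
For readers who want to see how those forecasts compound, the arithmetic is simple: an end-year revenue target and a compound annual growth rate together imply a starting base. A minimal sketch, assuming a five-year window ending in 2011 (the window is an assumption for illustration; only the 2011 targets and growth rates come from IDC):

```python
# Back out the starting revenue implied by IDC's CAGR and 2011 targets.
# The five-year window ending in 2011 is an assumption for illustration.
def implied_base(final_revenue_m: float, cagr: float, years: int) -> float:
    """Starting revenue ($M) implied by an end value and a CAGR."""
    return final_revenue_m / (1 + cagr) ** years

print(f"HCAs:     ${implied_base(279.7, 0.35, 5):.1f}M implied starting base")
print(f"Switches: ${implied_base(656.3, 0.472, 5):.1f}M implied starting base")
# -> roughly $62M and $95M, compounding to $279.7M and $656.3M by 2011.
```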

InfiniBand will have a role even if Ethernet dominates the world of networks, predicted Freesky Research in a report issued this week on 40 Gbit/s and 100 Gbit/s networks. In fact, the most cost-efficient high-speed local, storage, and wide-area networks are embracing multiple data link protocols, the research firm said. "The defining economic characteristic of sub-gigabit networks was framing, while the defining economic characteristic of multi-gigabit networks is clocking," David Gross, author of the report, said in a statement. "Therefore, in 40 and 100 Gigabit networks, Ethernet will frequently interconnect with Fibre Channel, InfiniBand, even Sonet, and will not be able to kill off those protocols the way it decimated Token Ring, FDDI, and ATM."

The IBTA isn't worried about Ethernet as InfiniBand begins to show up in more deployments. InfiniBand adoption was slow at first, Sparks acknowledges, for a couple of reasons. One, the link speed was so fast that bus speeds couldn't keep up. That bottleneck was removed with the PCI Express interface, and an upgraded version now becoming available offers even more speed. Privately held Aprius Inc., for example, this week demonstrated a PCI Express interconnect over fiber optics that can provide 80 Gbit/s of bandwidth between a host server and a target system.
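
The bus math makes the old bottleneck concrete. A rough sketch (the PCI-X and PCI Express rates below are standard published figures, not from the article):

```python
# Why the bus, not the link, was the original bottleneck. PCI-X and PCIe
# figures are standard published rates, used here for illustration.
pci_x_gbps = 64 * 133e6 / 1e9   # 64-bit bus at 133 MHz -> ~8.5 Gbit/s, shared
pcie1_lane_gbps = 2.5 * 8 / 10  # 2.5 GT/s, 8b/10b coding -> 2 Gbit/s per lane
pcie2_lane_gbps = 5.0 * 8 / 10  # 5 GT/s, 8b/10b coding -> 4 Gbit/s per lane

print(f"PCI-X:       {pci_x_gbps:.1f} Gbit/s, shared in both directions")
print(f"PCIe 1.x x8: {pcie1_lane_gbps * 8:.0f} Gbit/s per direction")
print(f"PCIe 2.0 x8: {pcie2_lane_gbps * 8:.0f} Gbit/s per direction")
# A 10-Gbit/s SDR adapter saturates PCI-X but fits comfortably in a PCIe
# x8 slot; QDR's 40 Gbit/s is what makes PCIe 2.0 and wider slots matter.
```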

Also, multiple vendors were working on different software stacks and management frameworks, which scared away customers seeking a standard. That changed once there was a standard for hardware, software, cabling, and management, and vendors agreed to open-source the entire software stack. The stack is up to version 1.4 and has been "hardened," so interoperability is working much better, Sparks says. Major vendors like IBM Corp. (NYSE: IBM), Hewlett-Packard Co. (NYSE: HPQ), Dell Inc. (Nasdaq: DELL), and Sun Microsystems Inc. (Nasdaq: JAVA) have played a role in standardizing the technology. In addition, the OpenFabrics Alliance is promoting a Linux-based software stack and a cross-platform one that includes Windows. Another potential driver for InfiniBand adoption is that VMware Inc. (NYSE: VMW) has incorporated the software stack in its ESX Server products.

Other vendors also are pushing the technology to new milestones. Mellanox Technologies Ltd. (Nasdaq: MLNX) and Dell this week demonstrated the first 40-Gbit/s interconnect for blade servers by combining Mellanox's InfiniBand ConnectX adapter and InfiniScale IV switch products in a Dell PowerEdge M1000e-series Blade Enclosure at the supercomputing show.

Obsidian Strategics Inc. this week said NASA is building a transcontinental encrypted InfiniBand link using the vendor's new Longbow E products to connect NASA Ames Research Center in California with NASA Goddard in Maryland. The Longbow E products, which support range-extended InfiniBand, inter-subnet routing, and open-standards encryption engines, will be commercially available next year if the trials with NASA are successful.

The trade association also tried to knock down the impression that InfiniBand was too pricey for mainstream deployment. While InfiniBand has mainly been used for very high-end computing systems, the IBTA said this week the technology beats out Gigabit Ethernet and 10-Gigabit Ethernet on a price-performance basis. Even though InfiniBand adapter cards and switch ports cost more per port, the throughput of 10-Gbit/s InfiniBand works out to a price of $25 per Gbit/s, compared with $85 for 10-Gigabit Ethernet and $140 for 4-Gbit/s Fibre Channel.
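
The comparison is just dollars per port divided by throughput per port. A minimal sketch (the per-port prices below are back-computed from the IBTA's quoted figures purely for illustration; they are not actual 2008 list prices):

```python
# Price-performance as the IBTA frames it: port price / port throughput.
# Port prices here are back-computed from the quoted $/Gbit/s figures,
# purely for illustration -- they are not actual list prices.
ports = {
    # name: (assumed port price in $, throughput in Gbit/s)
    "InfiniBand (10 Gbit/s)":   (250.0, 10.0),
    "10-Gigabit Ethernet":      (850.0, 10.0),
    "Fibre Channel (4 Gbit/s)": (560.0,  4.0),
}

for name, (price, gbps) in ports.items():
    print(f"{name}: ${price / gbps:.0f} per Gbit/s")
# -> $25, $85, and $140 per Gbit/s, matching the IBTA's numbers above.
```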

The IBTA, which reported a 15 percent growth in membership to more than 40 companies, introduced a branding campaign that includes "certified logos" to help buyers and users of the technology know which products have been approved after participating in a compliance and interoperability "plugfest" that was conducted in September. The association expects vendors, especially cabling vendors, to use the logos on products to help customers select the right cable for the right application. That makes sense, given that InfiniBand's single-, double-, and quad-data-rate cables are designed for speeds of 10 Gbit/s, 20 Gbit/s, 30 Gbit/s, and 40 Gbit/s, up to 120 Gbit/s. "The idea is to make it easy for users to instantly see that a specific cable has been certified and for what speeds," Sparks says.

The association also plans to boost its educational and outreach programs, with plans to conduct two- and three-day seminars at industry tradeshows.

Sparks predicts that InfiniBand will continue to enjoy strong adoption and will fare well against other interconnect technologies. "We are way ahead of the game and have a solid roadmap for the future," he said. "InfiniBand will be the first to pass 100 Gbit/s, and that will appeal to those who want performance now."
