Sun Hedges On Infiniband As System Interconnect

Sun Microsystems is now hedging on its plans to standardize on Infiniband as a system interconnect, suggesting it may embrace new versions of Ethernet and PCI Express as well as Infiniband.

March 11, 2004


San Jose, Calif. -- Opportunities for server blades are growing fast, but so are issues like managing, cooling and picking the right interconnects for this emerging category of densely packed systems, said computer executives at the Server Blade Summit here Wednesday.

Separately, Sun Microsystems is now hedging on its plans to standardize on Infiniband as a system interconnect, suggesting it may embrace new versions of Ethernet and PCI Express as well as Infiniband.

While traditional PC towers still represent half the server business, blades - essentially shelves of server cards stacked in a 19-inch rack - are on track to grow as fast as rack-mounted systems did in the late 1990s. Blades could become a $2.3 billion business totaling 800,000 units a year by 2005, James Mouton, vice president of the platform division of Hewlett-Packard's x86 server group, said in his keynote address.

Mouton and a counterpart from Sun agreed blades are set to take on all three tiers of data center jobs, handling Web traffic, applications and database transactions. Some users are experimenting with blades as an engine to drive thin-client computers, replacing traditional business desktops, said Mouton.

At the high end, blades will surpass mainframe performance while offering PC-like flexibility, said Frank Schwartz, a blades specialist working for the chief technology office of Sun's volume server division. "We are talking about a chassis of blades that will handle what it takes two to three mainframes to do today. It's going to turn a lot of things upside down. It will be a huge, huge shift," he said.

Over the next two years, server blades will pack two to four CPUs - some with multiple cores - on each card, along with 16 to 128 Gbytes of main memory and 10 to 12 Gbit/s interconnects, said Schwartz. "This is very, very competitive with the mainframe," he said, suggesting Sun will use such designs to attack the small but lucrative high-end markets where IBM Corp. now holds sway.

But OEMs have yet to solve the power and heat problems the dense systems bring. Next-generation blades using up to 96 two-processor cards could require 55 kW of power and generate 188,000 BTUs/hour, said HP's Mouton. Some users are already seeing air conditioning costs rival costs of the computers themselves, he added.

For its part, Sun estimates blades based on 32-bit CPUs will require 12 kW of power, and upcoming versions with 64-bit CPUs will need 22 kW.

That's a problem because many data centers today are specified to handle just 20 kW per square foot, forcing users to leave lots of extra space around the server blades.
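The quoted figures hang together on simple arithmetic: electrical load converts to heat at roughly 3,412 BTU/hour per kilowatt, so a 55 kW rack sheds about 188,000 BTU/hour, and against a 20 kW-per-footprint budget one such rack eats nearly three footprints of floor space. A back-of-the-envelope sketch (the function names are ours; the 20 kW budget is the figure cited above):

```python
# Back-of-the-envelope check of the power and cooling figures quoted above.
# Assumption: essentially all electrical load ends up as heat, converting
# at the standard rate of ~3,412 BTU/hour per kilowatt.

BTU_PER_HOUR_PER_KW = 3412.0

def heat_output_btu_per_hour(power_kw: float) -> float:
    """Heat a fully loaded rack must shed."""
    return power_kw * BTU_PER_HOUR_PER_KW

def footprints_needed(rack_kw: float, budget_kw_per_footprint: float = 20.0) -> float:
    """Floor area, in rack footprints, the facility power budget forces you to allocate."""
    return rack_kw / budget_kw_per_footprint

print(heat_output_btu_per_hour(55))   # ~187,660 BTU/h -- matches the quoted 188,000
print(footprints_needed(55))          # ~2.75 footprints of floor space for one HP rack
print(footprints_needed(22))          # Sun's 64-bit estimate still overruns one footprint
```

The last line is the crux of Schwartz's complaint below: even Sun's more modest 22 kW estimate exceeds a 20 kW budget, so density gained in the rack is given back on the floor.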

"You pack more density in a rack, but then you loose it in the data center. This is a major issue and many manufacturers are looking at how to handle it," Schwartz said. "I think you will see a lot of activity by computer makers to remove the heat more efficiently," he added.Software opportunities and issues for blades rival their hardware cousins.

"The need for blade-like management is larger than the blade market itself . . . but today you have to talk to every chassis differently, even if it's from the same manufacturer. The command-line interfaces are quite esoteric," said Schwartz.

On Tuesday (March 9) the Distributed Management Task Force (DMTF) agreed to collaborate with the Blade Systems Alliance to tackle the problem. The blade alliance will write guidelines for testing interoperability of the Common Information Model (CIM) interface the DMTF is developing.

A Phase 1 CIM interface should be available by July. However, it's not clear whether it will include a key profiling feature, said Schwartz. The feature would define a way for hardware devices to describe themselves in XML to management applications, so the applications can manage the hardware automatically.
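To illustrate the idea only - this is not the DMTF's CIM profile schema, which was still unfinished at the time, and every element and attribute name below is invented - a device might hand a management application a small XML self-description that the application parses to discover what it is managing:

```python
# Illustrative sketch only: the XML vocabulary here is hypothetical,
# not the actual CIM profile format under development at the DMTF.
import xml.etree.ElementTree as ET

# A self-description a blade chassis might emit to a management application.
DEVICE_PROFILE = """\
<DeviceProfile vendor="ExampleCo" model="Blade-96">
  <Capability name="PowerControl"/>
  <Capability name="ThermalSensors" count="12"/>
  <Capability name="RemoteConsole"/>
</DeviceProfile>
"""

def discover(profile_xml: str) -> dict:
    """Parse a device's XML self-description into a capability map."""
    root = ET.fromstring(profile_xml)
    return {
        "vendor": root.get("vendor"),
        "model": root.get("model"),
        "capabilities": [cap.get("name") for cap in root.findall("Capability")],
    }

info = discover(DEVICE_PROFILE)
print(info["model"], info["capabilities"])
```

With a shared schema of this kind, one management application could drive any vendor's chassis, instead of learning the esoteric per-chassis command-line interfaces Schwartz describes above.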

"The good news is the standards work is starting. The bad news is its still at a very low level and has quite a ways to go," said Schwartz. "Until now everyone has been firing off in various proprietary directions," he added.Schwartz gave one of Sun's strongest statements to date suggesting it is opening the door to supporting multiple interconnects beyond Infiniband. Sun had been one of the largest computer makers to commit to Infiniband across a range of server and storage systems.

He said Sun will use Infiniband in server blades for its low latency and bandwidth of up to 12 Gbits/s. But he suggested the company will also use a new version of 10Gbit Ethernet now in the works, and is considering variations on PCI Express.

Many vendors are building the remote direct memory access (RDMA) capabilities of Infiniband into Ethernet chips, along with TCP offload and iSCSI features. Others are developing a so-called Advanced Switching version of PCI Express for use primarily in telecom systems, although a streamlined version of AS without some of the telecom features is also in the works, Schwartz said.

"There are no clear winners in the near term," he said. "Technically [Infiniband and 10Gbit Ethernet with RDMA] are very similar, and costs will drive it," he added.
