40Gb/s InfiniBand Demonstration at the International Supercomputing Conference

The HPC Advisory Council is demonstrating a multi-vendor 40Gb/s InfiniBand network at the International Supercomputing Conference '09.

June 23, 2009

2 Min Read

The High Performance Computing (HPC) Advisory Council, an international group of vendors and research labs dedicated to researching and promoting HPC use, is demonstrating a multi-vendor 40Gb/s InfiniBand network integrating computing, graphics processing units, and networking to show interoperability at the International Supercomputing Conference '09. HPC is used by research organizations such as Lawrence Livermore National Laboratory and the Swiss National Supercomputing Centre (CSCS), and by universities such as the Cornell University Center for Advanced Computing and Ohio State University.

InfiniBand is used in HPC because it provides a high-speed serial connection between two nodes, with data rates up to 96Gb/s and latencies measured in microseconds. InfiniBand switches interconnect nodes in a switched fabric with multiple paths between nodes; the fabric avoids congestion and ensures that the full bandwidth can be used when needed. Besides HPC applications, InfiniBand can also be used for I/O virtualization by connecting the computer's memory bus to the InfiniBand network through host channel adapters. Target channel adapters connect the fabric to I/O modules such as Ethernet NICs and storage controllers. In addition, RAM can be pooled and shared.
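For readers curious how software sees this hardware, the short C sketch below uses the standard libibverbs API to list the host channel adapters on a node and print the raw state, width, and speed codes of each adapter's first port. It is a minimal illustration, assuming libibverbs and at least one adapter are installed on the machine; it is not part of the council's demonstration.

    /* Minimal sketch: enumerate InfiniBand host channel adapters with
     * libibverbs and print raw port attributes. Compile with -libverbs. */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices;
        struct ibv_device **devs = ibv_get_device_list(&num_devices);
        if (!devs || num_devices == 0) {
            fprintf(stderr, "no InfiniBand devices found\n");
            return 1;
        }

        for (int i = 0; i < num_devices; i++) {
            struct ibv_context *ctx = ibv_open_device(devs[i]);
            if (!ctx)
                continue;

            /* Query port 1; state, width, and speed come back as enum
             * codes (e.g. IBV_PORT_ACTIVE, link widths 1x/4x/8x/12x). */
            struct ibv_port_attr port;
            if (ibv_query_port(ctx, 1, &port) == 0)
                printf("%s: port state %d, width code %d, speed code %d\n",
                       ibv_get_device_name(devs[i]),
                       port.state, port.active_width, port.active_speed);

            ibv_close_device(ctx);
        }

        ibv_free_device_list(devs);
        return 0;
    }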

Two key demonstrations are an application sharing graphics processors for distributed modeling and simulation, and a remote HPC desktop. Complex analyses such as financial modeling and geographic simulations require specialized processors to complete their tasks. The first demonstration shows an application leveraging graphics processing units (GPUs) in many servers to complete a simulation; rather than running a computation and storing the results in a database, the GPUs interact directly, significantly reducing processing time. The other demonstration uses remote desktop to let a high-powered computer run three-dimensional graphics and send the results in real time to multiple desktops in high definition.
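The distributed pattern can be pictured with a toy MPI program in C: each node produces a partial result (standing in here for work done on its local GPU) and the nodes combine those results directly over the interconnect in a single collective call, with no intermediate database. This is only a hedged sketch of the general pattern, not the demonstration's code; the partial result is a placeholder value.

    /* Toy sketch: combine per-node partial results directly over the
     * fabric with an MPI collective. Compile with an MPI wrapper (mpicc)
     * and run with mpirun. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Stand-in for a partial result produced on this node's GPU. */
        double local_result = (double)rank;

        /* Sum the partial results across all nodes in one collective call;
         * the MPI library moves the data over the interconnect directly. */
        double global_result = 0.0;
        MPI_Allreduce(&local_result, &global_result, 1, MPI_DOUBLE,
                      MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("combined result: %f\n", global_result);

        MPI_Finalize();
        return 0;
    }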

For the demonstration, Mellanox is providing participants with its IS5035 36-port InfiniBand edge switches and its MTS3610, a 324-port 20 and 40Gb/s InfiniBand switch with a 51.8Tb/s switching fabric and 100-300 nanosecond latency. The HPC Advisory Council is also releasing a case study on the Juelich Research on Petaflop Architectures (JUROPA) in Jülich, Germany, a 300 teraflop/s HPC system consisting of 3,288 compute nodes, 76TB of memory, and 26,304 cores using Sun blade servers, Intel Nehalem processors, and cluster operation software by ParTec. Mellanox provided the 40Gb/s InfiniBand networking, which achieves 92% computing efficiency, compared to 50% for Ethernet, 70% for 10Gb/s, and 80% for 20Gb/s.
