Mellanox Announces InfiniScale IV

Mellanox InfiniScale IV switch architecture provides massively scalable 40-Gbit/s server and storage connectivity

November 13, 2007


RENO, Nev. -- Mellanox Technologies, Ltd. (NASDAQ: MLNX; TASE: MLNX), a leading supplier of semiconductor-based server and storage interconnect products, announced the InfiniScale IV silicon switch architecture, which further extends InfiniBand’s leadership in bandwidth, latency, scalability and optimized data traffic management. InfiniScale IV builds on the success of previous InfiniScale switch products, which have been deployed in data centers containing approximately 2 million 10, 20, 30 and 60Gb/s InfiniBand silicon ports. New switch systems based on the InfiniScale IV architecture, supporting up to 40Gb/s per port and 120Gb/s for inter-switch links, are expected to be available in the latter part of 2008 from several leading server and infrastructure system OEMs. InfiniScale IV products will continue to fuel the fast-growing InfiniBand switch system market, which IDC estimates has a port shipment CAGR of 53% from 2006 to 2011*.

“As thousand-node server and storage clusters are becoming mainstream business and research tools, we believe it is critical to provide the most scalable switch infrastructure building blocks that deliver the highest throughput, lowest switch hop latency and highly-efficient hardware-based traffic management capabilities,” said Eyal Waldman, chairman, president and CEO of Mellanox Technologies. “The InfiniScale IV architecture offers next generation I/O performance that properly scales with multi-core CPU systems demanded by enterprise and high performance computing applications including database, design automation, financial services, grids, health services, media creation, oil and gas, virtualization, weather analysis, web services, and more.”

InfiniScale IV architecture benefits include:

• 40Gb/s server and storage interconnect -- Rapid advances in server architecture, including multi-core CPUs, faster internal buses, and increased utilization due to virtualization, have driven the need for higher I/O speeds. Servers are now shipping with the PCI Express Gen2 bus, which provides 40Gb/s of raw bandwidth on an x8 connection – a perfect match for Mellanox InfiniScale IV switching and the upcoming 40Gb/s ConnectX IB adapters (the link-rate arithmetic is sketched after this list).

• 120Gb/s switch-to-switch interconnect -- InfiniBand users can enjoy 120Gb/s switch-to-switch bandwidth as early as the end of 2008 (years ahead of other industry initiatives to provide similar levels of bandwidth), using a variety of cabling methods. These links can be used to consolidate multiple cables into a few high-speed links when building large, non-blocking fabrics, simplifying management while reducing cost and complexity.

• 60 nanosecond switch hop latency -- In 2007, Mellanox began shipping ConnectX IB adapters, which deliver 1 microsecond application-to-application latency. Faster switching through the fabric is now an even more important component of total latency, especially since typical InfiniBand fabrics include 5 or more hops through multiple switch silicon devices.

• 36-port switch devices for optimal scalability -- The InfiniScale IV architecture will be used to build 36-port switch devices. This allows InfiniBand switch designers to create switching networks with fewer hops, further reducing end-to-end latency. For example, a fully non-blocking 648-port switch fabric can be designed with a maximum of 3 switch hops, as opposed to the 5 hops required with 24-port switch devices; a simplified scaling model is sketched after this list.

• Adaptive Routing to optimize data traffic flow – A key fabric differentiator of InfiniBand is the use of multiple paths between any two points, all of which can be used simultaneously (unlike Ethernet, where the Spanning Tree Protocol blocks redundant paths). When unexpected traffic patterns cause paths to be overloaded, Adaptive Routing in the new architecture can automatically move traffic to less congested paths.

• Congestion control to avoid hot spots – Congestion control is a hardware mechanism complementary to Adaptive Routing; it regulates data rates at the source to utilize the full bandwidth of the fabric most efficiently while avoiding traffic contention scenarios.
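The headline link rates lend themselves to a quick sanity check. The short Python sketch below is an illustration only, assuming the commonly cited figures of 10Gb/s per lane for QDR InfiniBand, 5Gb/s per lane for PCI Express Gen2, and 8b/10b encoding on both link types; it shows why a Gen2 x8 slot and a 4x QDR port both land at 40Gb/s raw (roughly 32Gb/s of effective data bandwidth), and why a 12x inter-switch link reaches 120Gb/s.

```python
# Back-of-the-envelope link-rate arithmetic (assumed figures).
# Both QDR InfiniBand and PCI Express Gen2 use 8b/10b encoding,
# so the effective data rate is about 80% of the raw signaling rate.

def rates_gbps(lanes, signaling_per_lane_gbps, encoding_efficiency=0.8):
    raw = lanes * signaling_per_lane_gbps
    return raw, raw * encoding_efficiency

print(rates_gbps(4, 10))   # 4x QDR InfiniBand port: (40, 32.0)
print(rates_gbps(12, 10))  # 12x QDR inter-switch link: (120, 96.0)
print(rates_gbps(8, 5))    # PCI Express Gen2 x8 slot: (40, 32.0)
```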
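The 648-port, 3-hop example can be illustrated with a simplified non-blocking fat-tree (folded-Clos) model. Actual switch systems may be organized differently, so the figures below are a rough scaling argument rather than a product specification; they also combine the hop counts with the quoted 60-nanosecond per-hop latency.

```python
# Simplified non-blocking fat-tree (folded-Clos) scaling model -- an
# illustration of the port-count and hop-count claims, not an actual design.

SWITCH_HOP_NS = 60  # per-hop switch latency quoted for InfiniScale IV

def max_end_ports(radix, tiers):
    # Each switch splits its ports evenly between the tier below and the tier above.
    return 2 * (radix // 2) ** tiers

def worst_case_hops(tiers):
    # End port -> up through the tiers to a top switch -> back down.
    return 2 * tiers - 1

for radix in (24, 36):
    for tiers in (2, 3):
        ports = max_end_ports(radix, tiers)
        hops = worst_case_hops(tiers)
        print(f"{radix}-port switches, {tiers} tiers: "
              f"up to {ports} end ports, {hops} hops, "
              f"~{hops * SWITCH_HOP_NS} ns of switching latency")

# 36-port devices reach 648 non-blocking ports in 2 tiers (3 hops, ~180 ns);
# 24-port devices top out at 288 ports in 2 tiers, so a 648-port fabric
# needs a third tier and up to 5 hops (~300 ns).
```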

“Products built utilizing the InfiniScale IV architecture will enable computing systems tackling complex and challenging workloads to scale to higher performance levels,” said Jie Wu, Research Manager for IDC's Technical Computing Systems program. “This is becoming increasingly important in a number of markets including technical computing and certain enterprise arenas, where applications are more sensitive to I/O bandwidth and latency.”

Mellanox Technologies Ltd.
