Mellanox Unveils Adapters

New Mellanox ConnectX IB adapters unleash multi-core processor performance

March 26, 2007

SANTA CLARA, Calif. -- Mellanox Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of semiconductor-based high-performance interconnect products, today announced the availability of the industry’s only 10 and 20Gb/s InfiniBand I/O adapters that deliver ultra-low 1 microsecond (µs) application latencies. The ConnectX IB fourth-generation InfiniBand Host Channel Adapters (HCAs) provide unparalleled I/O connectivity performance for servers, storage, and embedded systems optimized for high throughput and latency-sensitive clusters, grids and virtualized environments.

“Today’s servers integrate multiple dual- and quad-core processors with high-bandwidth memory subsystems, yet the I/O limitations of Gigabit Ethernet and Fibre Channel effectively degrade the system’s overall performance,” said Eyal Waldman, chairman, president and CEO of Mellanox Technologies. “ConnectX IB 10 and 20Gb/s InfiniBand adapters balance I/O performance with powerful multi-core processors responsible for executing mission-critical functions that range from applications which optimize Fortune 500 business operations to those that enable the discovery of new disease treatments through medical and drug research.”

Building on the success of the widely deployed Mellanox InfiniHost adapter products, ConnectX IB HCAs extend InfiniBand’s value with new performance levels and capabilities.

  • Leading performance: Industry’s only 10 and 20Gb/s I/O adapters with ultra-low 1µs RDMA write latency and 1.2µs MPI ping latency [1], and a high uni-directional MPI message rate of 25 million messages per second [2]; a sketch of how such ping latencies are measured follows this list. The InfiniBand ports connect to the host processor through a PCI Express x8 interface.

  • Extended network processing offload and optimized traffic and fabric management: New capabilities including hardware reliable multicast, enhanced atomic operations, hardware-based congestion control and granular quality of service.

  • Increased TCP/IP application performance: Integrated stateless-offload engines relieve the host processor of compute-intensive protocol stack processing, improving application execution efficiency.

  • Higher scalability: Scalable and reliable connected transport services and shared receive queues enhance the scalability of high-performance applications to tens of thousands of nodes; see the verbs sketch after this list.

  • Hardware-based I/O virtualization: Support for virtual service end-points, virtual address translation/DMA remapping, and per-virtual-machine isolation and protection, delivering native InfiniBand performance to applications running in virtual servers for enterprise data center (EDC) agility and service-oriented architectures (SOA).
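
The MPI ping latency cited above is conventionally measured with a ping-pong microbenchmark: two ranks bounce a small message back and forth, and the one-way latency is half the average round-trip time. The following minimal sketch illustrates the idea with standard MPI calls; it is not Mellanox's published test configuration, and the buffer size and iteration count are illustrative assumptions.

    /* pingpong.c -- minimal MPI ping-pong latency microbenchmark (sketch).
       Buffer size and iteration count are illustrative assumptions, not
       the vendor's benchmark configuration. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int iters = 10000;
        char buf[8] = {0};            /* small message: latency-bound */

        MPI_Barrier(MPI_COMM_WORLD);  /* start both ranks together */
        double start = MPI_Wtime();

        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        double elapsed = MPI_Wtime() - start;
        if (rank == 0)  /* one-way latency = half the average round trip */
            printf("avg one-way latency: %.2f us\n",
                   elapsed / iters / 2.0 * 1e6);

        MPI_Finalize();
        return 0;
    }

Run with two ranks (for example, mpirun -np 2 ./pingpong); the measured figure depends heavily on the interconnect, the MPI library, and process placement.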

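The shared receive queues mentioned in the scalability item are exposed to applications through the OpenFabrics verbs API. The sketch below is an illustrative assumption rather than Mellanox sample code (the queue depth is arbitrary); it shows a single shared receive queue (SRQ) being created so that many connections can draw receive buffers from one pool instead of each queue pair holding its own.

    /* srq_sketch.c -- creating a shared receive queue with libibverbs.
       Queue depth is an arbitrary assumption; a real application would
       also create QPs referencing this SRQ and post receive buffers. */
    #include <infiniband/verbs.h>
    #include <stdio.h>

    int main(void)
    {
        int num;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) {
            fprintf(stderr, "no InfiniBand devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* One SRQ feeds receive buffers to many queue pairs, so receive
           memory no longer grows linearly with the number of peers. */
        struct ibv_srq_init_attr init = {
            .attr = { .max_wr = 4096, .max_sge = 1 }
        };
        struct ibv_srq *srq = ibv_create_srq(pd, &init);
        if (!srq) {
            fprintf(stderr, "ibv_create_srq failed\n");
            return 1;
        }

        /* ... queue pairs created with their srq field pointing at this
           SRQ then share the one receive-buffer pool ... */

        ibv_destroy_srq(srq);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }
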
Leading OEM Support

“Our high-performance BladeSystem c-Class customer applications are increasingly relying on lower interconnect latency to improve performance and keep costs in check,” said Mark Potter, vice president of the BladeSystem Division at HP. “With the promise of even better application latency, HP's c-Class blades featuring the forthcoming Mellanox ConnectX IB HCAs will further enhance HP's industry-leading 4X DDR InfiniBand capability, bringing new dimensions to how Fortune 500 companies deploy clusters and improve ROI.”

Mellanox Technologies Ltd.
