Mellanox’s FDR InfiniBand Solution with NVIDIA GPUDirect RDMA Technology Provides Superior GPU-based Cluster Performance

    Triples small message throughput and reduces MPI latency by 69 percent

International Supercomputing Conference 2013

Business Wire

LEIPZIG, Germany -- June 17, 2013

Mellanox® Technologies, Ltd. (NASDAQ: MLNX; TASE: MLNX), a leading supplier of
high-performance, end-to-end interconnect solutions for data center servers
and storage systems, today announced the next major advancement in GPU-to-GPU
communications with the launch of its FDR InfiniBand solution with support for
NVIDIA® GPUDirect™ remote direct memory access (RDMA) technology.

The next generation of NVIDIA GPUDirect technology provides industry-leading
application performance and efficiency for GPU-accelerator based
high-performance computing (HPC) clusters. NVIDIA GPUDirect RDMA technology
dramatically accelerates communications between GPUs by providing a direct
peer-to-peer communication data path between Mellanox’s scalable HPC adapters
and NVIDIA GPUs.
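
To illustrate what this direct data path means at the application level, the minimal sketch below (not taken from the release) passes a GPU-resident buffer straight to MPI, assuming a CUDA-aware MPI library such as MVAPICH2 built with GPU support; with GPUDirect RDMA enabled, the InfiniBand adapter can read and write that buffer directly, without staging copies through host memory.

    /* Minimal sketch, assuming a CUDA-aware MPI (e.g. MVAPICH2 with CUDA
     * support). A device pointer is handed directly to MPI_Send/MPI_Recv;
     * with GPUDirect RDMA the HCA accesses GPU memory directly instead of
     * bouncing the data through a host buffer. Run with at least 2 ranks. */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int n = 1024;
        float *gpu_buf;
        cudaMalloc((void **)&gpu_buf, n * sizeof(float)); /* buffer lives in GPU memory */

        if (rank == 0) {
            cudaMemset(gpu_buf, 0, n * sizeof(float));
            /* Device pointer passed directly; the CUDA-aware MPI library
             * recognizes it and uses the GPU-direct path when available. */
            MPI_Send(gpu_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(gpu_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        cudaFree(gpu_buf);
        MPI_Finalize();
        return 0;
    }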

This capability provides a significant decrease in GPU-GPU communication
latency and completely offloads the CPU and system memory subsystem from all
GPU-GPU communications across the network. The latest performance results from
Ohio State University demonstrated an MPI latency reduction of 69 percent, from
19.78µs to 6.12µs, when moving data between InfiniBand-connected GPUs, while
overall throughput for small messages increased by 3X and bandwidth
performance increased by 26 percent for larger messages.
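
The headline percentage follows directly from the two latencies quoted above:

    \[ \frac{19.78\,\mu s - 6.12\,\mu s}{19.78\,\mu s} \approx 0.69, \]

i.e. roughly a 69 percent reduction in MPI latency.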

“MPI applications with short and medium messages are expected to gain a lot of
performance benefits from Mellanox’s InfiniBand interconnect solutions and
NVIDIA GPUDirect RDMA technology,” said Professor Dhableswar K. (DK) Panda of
The Ohio State University.

The performance testing was done using MVAPICH2 software from The Ohio State
University’s Department of Computer Science and Engineering, which delivers
world-class performance, scalability and fault tolerance for high-end
computing systems and servers using InfiniBand. MVAPICH2 software powers
numerous supercomputers in the TOP500 list, including the 7th-largest
multi-Petaflop TACC Stampede system with 204,900 cores interconnected by
Mellanox FDR 56Gb/s InfiniBand.

“The ability to transfer data directly to and from GPU memory dramatically
speeds up system and application performance, enabling users to run
computationally intensive code and get answers faster than ever before,” said
Gilad Shainer, vice president of marketing at Mellanox Technologies.
“Mellanox’s FDR InfiniBand solutions with NVIDIA GPUDirect RDMA ensure the
highest level of application performance, scalability and efficiency for
GPU-based clusters.”

“Application scaling on clusters is often limited by an increase in sent
messages, at progressively smaller message sizes,” said Ian Buck, general
manager of GPU Computing Software at NVIDIA. “With MVAPICH2 and GPUDirect
RDMA, we see substantial improvements in small message latency and bisection
bandwidth between GPUs directly to Mellanox’s InfiniBand network fabric.”

GPU-based clusters are widely used for computationally intensive tasks such as
seismic processing, computational fluid dynamics and molecular dynamics.
Because the GPUs perform high-performance floating-point operations across a
very large number of cores, a high-speed interconnect between the platforms is
required to deliver the bandwidth and latency the clustered GPUs need to
operate efficiently and to alleviate bottlenecks in the GPU-to-GPU
communication path.

Mellanox ConnectX and Connect-IB based adapters are the world’s only
InfiniBand solutions that provide full offloading capabilities critical to
avoiding CPU interrupts, data copies and system noise, while maintaining high
efficiencies for GPU-based clusters. Combined with NVIDIA GPUDirect RDMA
technology, Mellanox InfiniBand solutions are driving HPC environments to new
levels of performance and scalability.

Alpha code enabling GPUDirect RDMA functionality is available today,
including the alpha version of the MVAPICH2-GDR release from OSU to enable
existing MPI applications. General availability is expected in the fourth
quarter of 2013. For more information, please email hpc@mellanox.com.

Live demonstration during ISC’13 (June 17-19, 2013)

Visit Mellanox Technologies at ISC’13 (booth #326) during expo hours to see a
live demonstration of Mellanox’s FDR InfiniBand solutions with NVIDIA
GPUDirect RDMA, and the full suite of Mellanox’s end-to-end high-performance
InfiniBand and Ethernet solutions. For more information on Mellanox’s event
and speaking activities at ISC’13, please visit http://www.mellanox.com/isc13.

Supporting Resources:

  * Learn more about Mellanox InfiniBand Switches, Adapter Cards
  * Learn more about Mellanox Switch Management and Storage Fabric Software
  * Follow Mellanox on Twitter, Facebook, Google+, LinkedIn, and YouTube
  * Join the Mellanox Community

About Mellanox

Mellanox Technologies is a leading supplier of end-to-end InfiniBand and
Ethernet interconnect solutions and services for servers and storage. Mellanox
interconnect solutions increase data center efficiency by providing the
highest throughput and lowest latency, delivering data faster to applications
and unlocking system performance capability. Mellanox offers a choice of fast
interconnect products: adapters, switches, software and silicon that
accelerate application runtime and maximize business results for a wide range
of markets including high performance computing, enterprise data centers, Web
2.0, cloud, storage and financial services. More information is available at
www.mellanox.com.

Mellanox, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost,
InfiniScale, MLNX-OS, PhyX, SwitchX, Virtual Protocol Interconnect and
Voltaire are registered trademarks of Mellanox Technologies, Ltd. Connect-IB,
CoolBox, FabricIT, Mellanox Federal Systems, Mellanox Software Defined
Storage, MetroX, MetroDX, Mellanox Open Ethernet, Mellanox Virtual Modular
Switch, Open Ethernet, ScalableHPC, Unbreakable-Link, UFM and Unified Fabric
Manager are trademarks of Mellanox Technologies, Ltd. All other trademarks are
property of their respective owners.

Contact:

Mellanox Technologies, Ltd.
Press/Media
Waggener Edstrom
Ashley Paula, +1-415-547-7024
apaula@waggeneredstrom.com
or
USA Investors
Mellanox Technologies
Gwyn Lauber, +1-408-916-0012
gwyn@mellanox.com
or
Israel Investors
Gelbart Kahana Investor Relations
Nava Ladin, +972-3-6074717
nava@gk-biz.com