Mellanox InfiniBand Solutions Enable New Levels of Research at Brookhaven National Laboratory

Multidisciplinary research institution Brookhaven National Laboratory utilizes Mellanox FDR 56Gb/s InfiniBand to build a cost-effective 100Gb/s RDMA server and storage network

SC12

Business Wire

SALT LAKE CITY -- November 12, 2012

SC12--Mellanox® Technologies, Ltd. (NASDAQ: MLNX) (TASE: MLNX), a leading
supplier of high-performance, end-to-end interconnect solutions for data
center servers and storage systems, today announced that the U.S. Department
of Energy’s Brookhaven National Laboratory has deployed Mellanox FDR 56Gb/s
InfiniBand with RDMA to build a cost-effective and scalable 100Gb/s network
for compute and storage connectivity. Key research currently under way at
Brookhaven National Laboratory includes systems biology to advance the
fundamental knowledge underlying biological approaches to producing biofuels
and to sequestering carbon in terrestrial ecosystems, advanced energy systems
research, and nuclear and high-energy physics experiments that explore the
most fundamental questions about the nature of the universe.

“Researchers at Brookhaven National Laboratory rely on data-intensive
applications that require high-speed, high-throughput access to data storage
systems,” said Dantong Yu, research engineer at Brookhaven National
Laboratory. “Scientists often need to read and write data at aggregate speeds
of 10Gb/s, 100Gb/s and beyond, which is equivalent to fetching a full-length
HD movie in less than a second. The efficiency and scalability of Mellanox
InfiniBand solutions with RDMA should help us eliminate bottlenecks in the
interconnect between servers and storage, while also controlling processing
cost and latency. Faster access to data enables us to move our research
forward more quickly.”

One of ten national laboratories overseen and primarily funded by the Office
of Science of the U.S. Department of Energy (DOE), Brookhaven National
Laboratory conducts research in the physical, biomedical and environmental
sciences, as well as in energy technologies and national security.

Brookhaven National Laboratory constructed a storage area network (SAN)
testbed utilizing the iSCSI Extensions for RDMA (iSER) protocol over Mellanox
InfiniBand-based storage interconnects. The storage solution scales to allow a
large number of cluster/cloud hosts unrestricted access to virtualized
storage, and enables gateway hosts, such as FTP and web servers, to move data
between client and storage at extremely high speed. Combined with its
front-end network interface, the upgraded SAN will eliminate bottlenecks and
deliver 100Gb/s end-to-end data transfer throughput to support applications
that constantly need to move large amounts of data within and across
Brookhaven’s data centers.
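
For readers unfamiliar with iSER, an initiator (client) host attaches to an
iSER target through the same steps as ordinary iSCSI, with the transport
switched to RDMA. The Python sketch below is a minimal illustration of that
flow using the standard open-iscsi iscsiadm tool; it is not Brookhaven’s
actual configuration, and the portal address and target IQN are hypothetical
placeholders.

    # Minimal sketch: attach a Linux host to an iSER target with open-iscsi.
    # The portal IP and target IQN are hypothetical placeholders.
    import subprocess

    PORTAL = "192.0.2.10:3260"                   # iSER target portal (example)
    TARGET = "iqn.2012-11.example:storage.lun0"  # target IQN (example)

    def run(*args):
        print("$", " ".join(args))
        subprocess.run(args, check=True)

    # Discover targets exported by the portal.
    run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL)

    # Switch the node's transport from TCP to iSER so I/O uses RDMA.
    run("iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL,
        "--op", "update", "-n", "iface.transport_name", "-v", "iser")

    # Log in; the LUN then appears as a local block device (e.g. /dev/sdX).
    run("iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login")

Once logged in, reads and writes to that block device bypass the host TCP
stack; the RDMA-capable adapter moves data directly between initiator and
target memory, which is where the latency and CPU savings come from.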

“National research labs, such as Brookhaven National Laboratory, require
extremely fast data access for their applications in order to conduct their
research more effectively,” said Gilad Shainer, vice president of market
development at Mellanox. “Mellanox InfiniBand and RDMA solutions provide the
most efficient and scalable interconnect infrastructure to enable Brookhaven
National Laboratory to increase their application performance and achieve
their research goals.”

Visit Mellanox Technologies & Brookhaven National Laboratory at SC12 (November
12-15, 2012)

Visit Mellanox Technologies at booth #1531 on Wednesday, November 14, at
11:45am to see Brookhaven National Laboratory’s live demonstration of its
high-speed data transfer network.

Supporting Resources:

  * Learn more about Mellanox’s complete FDR 56Gb/s InfiniBand solution
  * Follow Mellanox on Twitter and Facebook

About Mellanox

Mellanox Technologies is a leading supplier of end-to-end InfiniBand and
Ethernet interconnect solutions and services for servers and storage. Mellanox
interconnect solutions increase data center efficiency by providing the
highest throughput and lowest latency, delivering data faster to applications
and unlocking system performance capability. Mellanox offers a choice of fast
interconnect products: adapters, switches, software and silicon that
accelerate application runtime and maximize business results for a wide range
of markets including high performance computing, enterprise data centers, Web
2.0, cloud, storage and financial services. More information is available at
www.mellanox.com.

Mellanox, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost,
InfiniScale, PhyX, SwitchX, Virtual Protocol Interconnect and Voltaire are
registered trademarks of Mellanox Technologies, Ltd. Connect-IB, FabricIT,
MetroX, MLNX-OS, ScalableHPC, Unbreakable-Link, UFM and Unified Fabric Manager
are trademarks of Mellanox Technologies, Ltd. All other trademarks are
property of their respective owners.

Contact:

Mellanox Technologies, Ltd.
Press/Media Contacts
Waggener Edstrom
Ashley Paula, +1-415-547-7024
apaula@waggeneredstrom.com
or
USA Investor Contact
Mellanox Technologies
Gwyn Lauber, +1-408-916-0012
gwyn@mellanox.com
or
Israel Investor Contact
Gelbart Kahana Investor Relations
Nava Ladin, +972-3-6074717
nava@gk-biz.com