---
title: "Interconnect Your Future with Mellanox 100Gb EDR Interconnects and CAPI"
date: "2015-10-05"
categories:
- "blogs"
tags:
- "openpower"
- "ibm"
- "nvidia"
- "mellanox"
- "department-of-energy"
- "coral"
- "featured"
- "hpc"
- "capi"
- "acceleration"
- "capi-series"
---
_By Scot Schultz, Director of HPC and Technical Computing, Mellanox_
## Business Challenge
Some computing jobs are so large that they must be split into pieces and solved in parallel, distributed via the network across a number of computing nodes. We find some of the world's largest computing jobs in the realm of scientific research, where continuous advancement will require extreme-scale computing with machines that are 500 to 1,000 times more capable than today's supercomputers. As researchers constantly refine their models and push to higher resolutions, the demand for more parallel computation and advanced networking capabilities is paramount.
## Computing Challenge
Efficient high-performance computing systems require [high-bandwidth, low-latency connections](http://bit.ly/1Lctmnq) between thousands of multi-processor nodes, as well as high-speed storage systems. As a result of the ubiquitous data explosion and the ascendance of Big Data, especially unstructured data, today's systems need to move enormous amounts of data as well as perform more sophisticated analysis.
The network has become the critical element in gaining insight from today's massive flows of data.
## Solution
Only Mellanox delivers industry-standards-based solutions with advanced native hardware acceleration engines, and leveraging the latest advancements from IBM's OpenPOWER architecture takes performance to a whole new level.
Already deployed in over 50% of the world's most powerful supercomputing systems, Mellanox's high-speed interconnect solutions are proven to deliver the highest scalability, efficiency, and unmatched performance for HPC systems. The latest [Mellanox EDR 100Gb/s interconnect architecture](http://bit.ly/1Lctmnq) includes native support for one of the newest innovations brought forth by OpenPOWER, [the Coherent Accelerator Processor Interface (CAPI)](http://ibm.co/1QVeo58).
[Mellanox 100Gb/s ConnectX®-4 architecture with native support for CAPI](http://bit.ly/1Lctmnq) is built to handle massively parallel communication. By delivering up to 100Gb/s of reliable, zero-loss connectivity, ConnectX-4 with CAPI provides an optimized platform for moving enormous volumes of data. With much tighter integration between the Mellanox high-performance interconnect and the processor, POWER-based systems can rip through high volumes of data and bring compute and data closer together to derive greater insights. Mellanox ConnectX-4 can be leveraged for 100Gb CAPI-attached InfiniBand, Ethernet, or storage.
[![CAPI Interconnects with Mellanox Data Flow](images/CAPI-Mellanox-Interconnect-Data-Flow-1024x607.jpg)](http://bit.ly/1Vz7KTC)
CAPI also simplifies memory management between the interconnect and the CPU, which reduces overhead while increasing performance and scalability. Because CAPI provides a level of integration that removes the additional latency of traditional PCI Express bus semantics, the Mellanox interconnect can move data in and out of the system with even greater efficiency.
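To make that contrast concrete, here is a minimal sketch of the CAPI attach model, assuming a CAPI-capable POWER system and IBM's open-source libcxl user-space library. The AFU device path, the work element layout, and the completion protocol below are illustrative placeholders rather than any real accelerator's interface; the point is that, once attached, the accelerator shares the application's virtual address space coherently, with no DMA buffer pinning or PCI Express address translation in the data path.

```c
/* Minimal sketch of the CAPI attach model using IBM's open-source libcxl.
 * The device path and the WED (work element descriptor) layout below are
 * hypothetical -- a real AFU defines its own. Build with -lcxl on POWER. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <libcxl.h>

/* Because CAPI is cache-coherent, the AFU can dereference these ordinary
 * virtual-memory pointers directly: no pinned DMA buffers, no bus-address
 * translation managed by a host driver. */
struct wed {
    volatile uint64_t status;  /* AFU writes completion here coherently  */
    void *src;                 /* source buffer in application memory    */
    void *dst;                 /* destination buffer, same address space */
    size_t len;
};

int main(void)
{
    struct cxl_afu_h *afu = cxl_afu_open_dev("/dev/cxl/afu0.0d");
    if (!afu) { perror("cxl_afu_open_dev"); return 1; }

    /* Round the allocation up to the 128-byte cacheline alignment. */
    struct wed *w = aligned_alloc(128, 128);
    if (!w) return 1;
    w->src = malloc(4096);
    w->dst = malloc(4096);
    w->len = 4096;
    w->status = 0;

    /* Attach this process's context; the WED is just a pointer the AFU
     * can chase through our own page tables. */
    if (cxl_afu_attach(afu, (uint64_t)(uintptr_t)w)) {
        perror("cxl_afu_attach");
        return 1;
    }

    while (w->status == 0)  /* spin until the AFU signals completion */
        ;

    cxl_afu_free(afu);
    return 0;
}
```

On a conventional PCI Express accelerator, the equivalent flow would also involve allocating and pinning DMA-capable buffers and translating between virtual and bus addresses in a kernel driver; CAPI's coherent attach is what lets the sketch above skip those steps.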
Back to tackling the world's toughest scientific problems: Mellanox ConnectX-4 EDR 100Gb/s “Smart” interconnect technology and IBM's POWER architecture with CAPI can help. [Oak Ridge National Laboratory](http://1.usa.gov/1VxO4EN) and [Lawrence Livermore National Laboratory](http://1.usa.gov/1M9X2hi), for example, have chosen solutions utilizing OpenPOWER designs developed by [Mellanox](http://bit.ly/1LruDJ5), [IBM](http://ibm.co/1Nf4jSK), and [NVIDIA](http://bit.ly/1QThDtP) for the Department of Energy's next-generation Summit and Sierra supercomputer systems. Summit and Sierra will deliver more than 100 petaflops of raw computing power at peak performance, which will make them the most powerful computers in the world.
From nanotechnology and climate research to medical research and the discovery of renewable energies, Mellanox and members of the OpenPOWER ecosystem are leading innovation in high-performance computing.
## Learn more about Mellanox 100Gb/s and CAPI
Mellanox CAPI-attached interconnects are suitable for the largest deployments, but they are also accessible for more modest clusters, clouds, and commercial datacenters. Here are a few ways to get started.
- [Learn more about Mellanox ConnectX-4 100Gb Adapters](http://bit.ly/1RpVW5w)
- [Read the Mellanox ConnectX-4 Product Brief](http://bit.ly/1LruDJ5)
- [Follow a tutorial to get acquainted with your ConnectX-4 adapter on Linux](http://bit.ly/1FQWXSH) (a short device-query sketch follows this list)
- [Download a whitepaper on SwitchIB, the switch architecture for 100Gb interconnects](http://bit.ly/1Lctmnq)
- [Engage with others using Mellanox 100Gb technology and find solutions in the Developer Community](http://bit.ly/1RpVW5w)
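As a complement to the Linux tutorial linked above, the sketch below shows one way to confirm that an adapter is visible and its link is up, using the standard libibverbs API. Nothing here is Mellanox- or CAPI-specific; port 1 is assumed, and you would link with -libverbs.

```c
/* Enumerate RDMA-capable adapters and report port 1's link state using
 * the standard libibverbs API. Build with: cc query.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list) { perror("ibv_get_device_list"); return 1; }

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(list[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        if (!ibv_query_port(ctx, 1, &port))  /* port numbers start at 1 */
            printf("%s: state=%s link-layer=%s\n",
                   ibv_get_device_name(list[i]),
                   ibv_port_state_str(port.state),
                   port.link_layer == IBV_LINK_LAYER_ETHERNET
                       ? "Ethernet" : "InfiniBand");

        ibv_close_device(ctx);
    }

    ibv_free_device_list(list);
    return 0;
}
```

On a healthy node you would expect state=PORT_ACTIVE; the same information is available from the command line via `ibv_devinfo` or `ibstat`.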
Keep coming back for blog posts from IBM and other OpenPOWER Foundation partners on how you can use CAPI to accelerate computing, networking, and storage.
- [CAPI Series 1: Accelerating Business Applications in the Data-Driven Enterprise with CAPI](https://openpowerfoundation.org/blogs/capi-drives-business-performance/)
- [CAPI Series 2: Using CAPI and Flash for larger, faster NoSQL and analytics](https://openpowerfoundation.org/blogs/capi-drives-business-performance/)
- [CAPI Series 4: Accelerating Key-value Stores (KVS) with FPGAs and OpenPOWER](https://openpowerfoundation.org/blogs/accelerating-key-value-stores-kvs-with-fpgas-and-openpower/)
* * *
**_About Scot Schultz_**
_[![Scot Schultz, Mellanox](images/ScotSchultz.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/10/ScotSchultz.jpg)Scot Schultz is an HPC technology specialist with broad knowledge of operating systems, high-speed interconnects, and processor technologies. Joining the Mellanox team in March 2013 as Director of HPC and Technical Computing, Schultz is a 25-year veteran of the computing industry. Scot also maintains his role as Director of Educational Outreach and is a founding member of the HPC Advisory Council and various other industry organizations. Follow him on Twitter: [@ScotSchultz](https://twitter.com/ScotSchultz)_