# The NVIDIA Quantum InfiniBand Platform

Bring end-to-end high-performance networking to scientific computing, AI, and cloud data centers.

[Learn More](https://www.nvidia.com/en-us/networking/products/infiniband/quantum-x800.md)

### Introduction

## NVIDIA Quantum InfiniBand Networking Solutions

Complex workloads demand ultra-fast processing of high-resolution simulations, extreme-size datasets, and highly parallelized algorithms. As these needs continue to grow, NVIDIA Quantum InfiniBand—the world’s only fully offloadable, In-Network Computing platform—provides dramatic leaps in performance to achieve faster time to discovery with less cost and complexity.

## New Co-Packaged Silicon Photonic Networking Switches Scale to Millions of GPUs Across Multi-Site AI Factories

[Learn More](https://www.nvidia.com/en-us/networking/products/silicon-photonics.md)

## NVIDIA Quantum-X800 InfiniBand for Highest-Performance AI-Dedicated Infrastructure

[Read Press Release](https://nvidianews.nvidia.com/news/networking-switches-gpu-computing-ai)

[Read Data Sheet](https://nvdam.widen.net/s/hbp8zz7fvt/solution-overview-gtcspring24-quantum-x800-3175164)

### Products

## The NVIDIA Quantum InfiniBand Platform

### InfiniBand Adapters

As part of the NVIDIA Quantum InfiniBand Networking Platform, NVIDIA® ConnectX® InfiniBand host channel adapters (HCAs) provide ultra-low latency, extreme throughput, and innovative NVIDIA In-Network Computing engines to deliver the acceleration, scalability, and feature-rich technology needed for today's modern workloads.

[Learn More](https://www.nvidia.com/en-us/networking/infiniband-adapters.md)

### Data Processing Units (DPUs)

The NVIDIA BlueField® DPUs combine powerful computing, high-speed networking, and extensive programmability to deliver software-defined, hardware-accelerated solutions for the most demanding workloads. From accelerated AI and scientific computing to [cloud-native supercomputing](https://www.nvidia.com/en-us/networking/products/cloud-native-supercomputing.md), BlueField redefines what’s possible.

[Learn More](https://www.nvidia.com/en-us/networking/products/data-processing-unit.md)

[Learn How DPUs Accelerate Data-Intensive Workloads](https://www.nvidia.com/en-us/networking/products/data-processing-unit/hpc.md)

### InfiniBand Switches

NVIDIA Quantum InfiniBand switch systems deliver the highest performance and port density available. In-Network Computing acceleration engines such as NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™, together with advanced management features such as self-healing networking, quality of service, and enhanced virtual lane mapping, provide a performance boost for industrial, AI, and scientific applications.

[Learn More](https://www.nvidia.com/en-us/networking/infiniband-switching.md)

### Routers and Gateway Systems

NVIDIA Quantum InfiniBand systems provide the highest scalability and subnet isolation using InfiniBand routers and InfiniBand-to-Ethernet gateway systems. The gateways offer a scalable and efficient way to connect InfiniBand data centers to Ethernet infrastructures.

[Learn More](https://www.nvidia.com/en-us/networking/infiniband/gateway-systems.md)

### Long-Haul Systems

NVIDIA MetroX® long-haul systems can seamlessly connect remote NVIDIA Quantum InfiniBand data centers, storage, and other InfiniBand platforms. They can extend the reach of InfiniBand up to 40 kilometers, enabling native InfiniBand connectivity between remote data centers or between data center and remote storage infrastructures for high availability and disaster recovery.

[Learn More](https://www.nvidia.com/en-us/networking/infiniband-long-haul-systems.md)

### Cables and Transceivers

LinkX® cables and transceivers are designed to maximize the performance of HPC networks, which require high-bandwidth, low-latency, highly reliable connections between InfiniBand elements.

[Learn More](https://www.nvidia.com/en-us/networking/interconnect.md)

### Capabilities

## How InfiniBand Enhances the Network

### In-Network Computing

[NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™](https://docs.nvidia.com/networking/software/accelerator-software/index.html) offloads collective communication operations to the switch network, decreasing the amount of data traversing the network, reducing the time of Message Passing Interface (MPI) operations, and increasing data center efficiency.
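The traffic savings can be illustrated with a back-of-the-envelope model (a hypothetical sketch, not NVIDIA code; the function names and the 256-host example are illustrative assumptions): compare a reduction in which a root host collects every peer's full vector against one in which each leaf switch aggregates its hosts' contributions in-network and forwards a single partial result, as SHARP-capable switches do for collectives such as allreduce.

```python
def host_based_reduce_bytes(num_hosts: int, vector_bytes: int) -> int:
    """Bytes converging on the root host when every peer sends its
    full vector and the root performs all of the arithmetic itself."""
    return (num_hosts - 1) * vector_bytes


def in_network_reduce_bytes(num_leaf_switches: int, vector_bytes: int) -> int:
    """Bytes converging on the root when each leaf switch reduces its
    attached hosts' vectors in the network and forwards one partial
    result, the way switch-side aggregation works."""
    return num_leaf_switches * vector_bytes


# 256 hosts spread over 16 leaf switches, reducing a 1 MiB vector:
MIB = 1 << 20
print(host_based_reduce_bytes(256, MIB) // MIB)  # 255 MiB cross the fabric
print(in_network_reduce_bytes(16, MIB) // MIB)   # 16 MiB with switch aggregation
```

The toy numbers make the point of the paragraph above concrete: aggregating in the switches shrinks the data converging on any one point by roughly the fan-in of each aggregation stage.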

### Self-Healing Network

NVIDIA Quantum InfiniBand with self-healing network capabilities overcomes link failures, enabling network recovery 5,000X faster than software-based solutions. These capabilities take advantage of the intelligence built into the latest generation of InfiniBand switches.

### Quality of Service

NVIDIA Quantum InfiniBand is the only high-performance interconnect solution with proven quality-of-service capabilities, including advanced congestion control and adaptive routing, resulting in unmatched network efficiency.

### Network Topologies

NVIDIA Quantum InfiniBand offers centralized management and supports any topology, including Fat Tree, Hypercube, multi-dimensional Torus, and Dragonfly+. Optimized routing algorithms maximize performance for topologies designed around particular application communication patterns.
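For a sense of scale, a non-blocking two-level fat tree built from radix-r switches supports r²/2 hosts, because each leaf switch splits its ports evenly between hosts and spine uplinks. A minimal sizing sketch (my own helper, not part of any NVIDIA tool):

```python
def max_hosts_two_level_fat_tree(radix: int) -> int:
    """Maximum hosts in a non-blocking two-level fat tree.

    Each of the `radix` leaf switches uses radix/2 ports for hosts and
    radix/2 uplinks, one to each of the radix/2 spine switches, so the
    fabric connects radix * (radix / 2) hosts at full bisection bandwidth.
    """
    return radix * (radix // 2)


# With 64-port switches (the radix of an NVIDIA Quantum-2 NDR switch):
print(max_hosts_two_level_fat_tree(64))  # 2048 hosts
```

Going beyond this host count requires a third switch tier, Dragonfly+, or accepting an oversubscribed (blocking) fabric.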

### Software

## The InfiniBand Software Stack

### MLNX\_OFED

The OpenFabrics Enterprise Distribution (OFED) from the [OpenFabrics Alliance](https://www.openfabrics.org/) has been collaboratively developed and tested by high-performance input/output (IO) vendors. NVIDIA MLNX\_OFED is an NVIDIA-tested version of OFED.

[Learn More](https://network.nvidia.com/products/infiniband-drivers/linux/mlnx_ofed/)

### HPC-X

NVIDIA HPC-X® is a comprehensive MPI and SHMEM/PGAS software suite. HPC-X leverages InfiniBand In-Network Computing and acceleration engines to optimize research and industry applications.

[Learn More](https://developer.nvidia.com/networking/hpc-x)

### UFM

The NVIDIA Unified Fabric Manager (UFM®) platform empowers data center administrators to efficiently provision, monitor, manage, and proactively troubleshoot their InfiniBand network infrastructure.

[Learn More](https://www.nvidia.com/en-us/networking/infiniband/ufm.md)

### Magnum IO

NVIDIA Magnum IO™ utilizes network IO, In-Network Computing, storage, and IO management to simplify and speed up data movement, access, and management for multi-GPU, multi-node systems.

[Learn More](https://www.nvidia.com/en-us/data-center/magnum-io.md)


### Next Steps

## Ready to Get Started?

### Configure Your Cluster

Use this online tool to configure clusters based on two-level fat tree and Dragonfly+ topologies.

[Get Started](https://www.nvidia.com/en-us/networking/infiniband-configurator.md)

### Take Networking Courses

Explore deep technical training topics on NVIDIA Quantum InfiniBand networking through the NVIDIA Academy.

[Learn More](https://academy.nvidia.com/en/)

### Ready to Purchase?

Visit the NVIDIA marketplace to discover more information on how to purchase NVIDIA networking solutions.

[How to Buy](https://marketplace.nvidia.com/en-us/enterprise/networking/)
