BREAKTHROUGH ACCELERATED CPU FOR AI AND HPC
NVIDIA Grace Hopper™ Solutions
The NVIDIA GH200 Grace Hopper™ Superchip is a breakthrough accelerated CPU designed from the ground up for giant-scale AI and high-performance computing (HPC) applications.
AI REFERENCE ARCHITECTURE
- GPU Solutions
- GPU Workstations
- InfiniBand Fabrics
- GPU Cabling
- Liquid/Air Cooling
ADVANCING AI
- CPU+GPU designed for giant-scale AI and HPC
- New 900 gigabytes per second (GB/s) coherent interface, 7X faster than PCIe Gen5
- Supercharges accelerated computing and generative AI with HBM3 and HBM3e GPU memory
- Runs all NVIDIA software stacks and platforms, including NVIDIA AI Enterprise, HPC SDK, and Omniverse™
Grace Hopper Use Cases
GH200 Grace Hopper Superchip Platform for the Era of Accelerated Computing and Generative AI
- Financial Services
- Computer Vision
- Deep Learning
KEY FEATURES
NVIDIA NVLink-C2C is a memory-coherent, high-bandwidth, and low-latency interconnect for superchips. The heart of the GH200 Grace Hopper Superchip, it delivers up to 900 gigabytes per second (GB/s) of total bandwidth, which is 7X higher than PCIe Gen5 lanes commonly used in accelerated systems.
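The 7X figure can be sanity-checked with rough arithmetic. Note that the ~126GB/s bidirectional figure for a PCIe Gen5 x16 link is an assumption drawn from the public PCIe 5.0 specification, not a number stated in this document:

```python
# Rough sanity check of the "7X higher than PCIe Gen5" claim.
# Assumption: PCIe Gen5 signals at 32 GT/s per lane; an x16 link moves
# roughly 63 GB/s per direction after encoding overhead, so about
# 126 GB/s bidirectional.
NVLINK_C2C_GBPS = 900          # total coherent bandwidth, per the datasheet
PCIE_GEN5_X16_GBPS = 2 * 63    # assumed bidirectional PCIe Gen5 x16 figure

ratio = NVLINK_C2C_GBPS / PCIE_GEN5_X16_GBPS
print(f"NVLink-C2C is roughly {ratio:.1f}x PCIe Gen5 x16")  # ~7.1x
```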
>> 72-core NVIDIA Grace CPU
>> NVIDIA H100 Tensor Core GPU
>> Up to 480GB of LPDDR5X memory with error-correction code (ECC)
>> Supports 96GB of HBM3 or 144GB of HBM3e
>> Up to 624GB of fast-access memory
>> NVLink-C2C: 900GB/s of coherent memory
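The 624GB fast-access figure listed above is simply the two coherent memory pools combined; a minimal check:

```python
# The "fast-access memory" total is the sum of the two coherent pools:
# up to 480GB of LPDDR5X on the Grace CPU plus up to 144GB of HBM3e on
# the Hopper GPU, all addressable by either processor over NVLink-C2C.
lpddr5x_gb = 480   # Grace CPU memory (with ECC)
hbm3e_gb = 144     # Hopper GPU memory (HBM3e configuration)

total_fast_access_gb = lpddr5x_gb + hbm3e_gb
print(total_fast_access_gb)  # 624
```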
Massive Bandwidth for Compute Efficiency
NVLink-C2C memory coherency enables programming of both the Grace CPU Superchip and the Grace Hopper Superchip with a unified programming model.
Heterogeneous Accelerated Platform
The Grace Hopper Superchip is the first true heterogeneous accelerated platform for HPC and AI workloads. It accelerates applications with the combined strengths of GPUs and CPUs while providing the simplest and most productive heterogeneous programming model to date, enabling scientists and engineers to focus on solving the world’s most important problems.
GPU SOLUTION CORE COMPONENTS
InfiniBand and Ethernet technologies provide the networking functionality in GPU-accelerated systems. Proper networking is critical to ensuring that GPU solutions avoid bottlenecks and performance degradation in AI workloads.
HIGH PERFORMANCE INTERCONNECT
CORE COMPONENTS
With industry-tailored software and tools from the NVIDIA AI Enterprise software suite, enterprises can rely on this proven platform to build their AI applications faster and more cost-effectively.
ACCELERATE YOUR AI WORKLOADS
WANT TO KNOW MORE?