
H100 GPUs - All you need to know

GPU card specifications can be long and hard to absorb, so here we break down the key points to make them easy for users, especially new users, to understand.

  1. The NVIDIA H100 card is a dual-slot, 10.5-inch PCI Express Gen5 card based on the NVIDIA Hopper™ architecture.
  2. It uses a passive heat sink for cooling, which requires system airflow to operate the card properly within its thermal limits.
  3. The NVIDIA H100 PCIe operates unconstrained up to its maximum thermal design power (TDP) level of 350W to accelerate applications that require the fastest computational speed and highest data throughput.
  4. The NVIDIA H100 PCIe delivers the highest memory bandwidth of any PCIe card: more than 2,000 gigabytes per second (GB/s). This speeds up time to solution for the largest models and the largest data sets.
  5. The NVIDIA H100 PCIe card features Multi-Instance GPU (MIG) capability. This can be used to partition the GPU into as many as seven hardware-isolated GPU instances, providing a unified platform that enables elastic data centers to adjust dynamically to shifting workload demands.
  6. The versatility of the NVIDIA H100 means that IT managers can maximize the utility of every GPU in their data center.
  7. NVIDIA H100 PCIe cards use three NVIDIA® NVLink® bridges (the same bridges used with NVIDIA A100 PCIe cards). This allows two NVIDIA H100 PCIe cards to be connected to deliver 600 GB/s of bidirectional bandwidth, roughly 5x the bandwidth of PCIe Gen5, to maximize application performance for large workloads.
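If you want a feel for what the headline numbers above mean in practice, here is a minimal back-of-the-envelope sketch. The PCIe Gen5 x16 and NVLink bridge figures below are common published values rather than something taken from this article, so treat them as assumptions:

```python
# Sanity-check the headline figures above (assumed values, not from this page):
# - PCIe Gen5 x16 moves roughly 64 GB/s per direction, i.e. ~128 GB/s bidirectional.
# - NVIDIA quotes ~600 GB/s bidirectional for the H100 PCIe NVLink bridge setup.
PCIE_GEN5_X16_BIDIRECTIONAL_GBPS = 128
NVLINK_BRIDGE_BIDIRECTIONAL_GBPS = 600

ratio = NVLINK_BRIDGE_BIDIRECTIONAL_GBPS / PCIE_GEN5_X16_BIDIRECTIONAL_GBPS
print(f"NVLink bridges vs PCIe Gen5: {ratio:.1f}x")  # ~4.7x, rounded to the "5x" claim

# MIG: partitioning one 80 GB H100 PCIe card into seven hardware-isolated instances
H100_PCIE_MEMORY_GB = 80
MIG_INSTANCES = 7
print(f"Memory per MIG instance: ~{H100_PCIE_MEMORY_GB / MIG_INSTANCES:.0f} GB")
```

This is only an illustration of the ratios involved; actual achievable bandwidth and MIG instance sizes depend on the system and the MIG profiles you choose.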

We are a premier NVIDIA partner and have deployed many H100s to date; we'd be happy to help you run through your H100 requirements.


General Enquiry
