
NVIDIA H100 SXM

H-Series · Hopper Architecture

The NVIDIA H100 SXM is the flagship data center GPU built on the Hopper architecture. Designed for the most demanding AI training and HPC workloads, it delivers up to 9x faster AI training and 30x faster AI inference compared to the prior generation A100. With 80GB of HBM3 memory and 3.35 TB/s bandwidth, the H100 SXM is the gold standard for large language model training, scientific simulation, and enterprise AI deployment.

Key Features

4th-gen Tensor Cores · NVLink 4.0 (900 GB/s) · Transformer Engine · PCIe Gen5 · Confidential Computing
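The 900 GB/s NVLink figure is the aggregate across all of the GPU's links. As a quick sanity check, here is a minimal sketch assuming the commonly cited NVLink 4.0 layout for the H100 SXM of 18 links at 50 GB/s bidirectional each (the per-link breakdown is an assumption, not part of the spec sheet above):

```python
# Back-of-envelope check of the aggregate NVLink 4.0 bandwidth.
# Link count and per-link rate are assumed (commonly cited for H100 SXM),
# not values taken from the spec table above.
NVLINK_LINKS = 18      # NVLink 4.0 links per GPU (assumed)
GB_S_PER_LINK = 50     # bidirectional GB/s per link (assumed)

aggregate_gb_s = NVLINK_LINKS * GB_S_PER_LINK
print(aggregate_gb_s)  # 900, matching the quoted 900 GB/s
```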

Full Specifications

Compute

Architecture Hopper
Process Node 4nm TSMC
CUDA Cores 16,896
Tensor Cores 528
Base Clock 1095 MHz
Boost Clock 1830 MHz
FP32 Performance 66.91 TFLOPS
FP16 Performance 989.4 TFLOPS
BF16 Performance 989.4 TFLOPS
INT8 Performance 1978.9 TOPS

Memory

Memory Size 80 GB
Memory Type HBM3
Memory Bus 5120-bit
Memory Bandwidth 3350 GB/s
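The quoted bandwidth follows from the bus width and the HBM3 per-pin data rate. A rough sketch of the arithmetic (the ~5.23 Gbps per-pin rate is an assumption, as it is not listed in the table above):

```python
# Peak memory bandwidth ≈ bus width (bits) × per-pin data rate (Gbps) / 8.
BUS_WIDTH_BITS = 5120   # from the spec table
PIN_RATE_GBPS = 5.23    # approximate HBM3 per-pin rate (assumed)

bandwidth_gb_s = BUS_WIDTH_BITS * PIN_RATE_GBPS / 8
print(round(bandwidth_gb_s))  # ≈ 3347, close to the quoted 3350 GB/s
```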

Power & Physical

TDP 700W
Form Factor SXM5
Power Connectors SXM5 connector

Features & Connectivity

PCIe Version PCIe 5.0
NVLink Support Yes
Multi-GPU Support Yes

Availability

MSRP (USD) $30,000
Release Date Mar 2023
Status Available

Use Cases

LLM Training · Large-scale Inference · HPC Simulation · Drug Discovery · Climate Modeling
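For LLM workloads, the ratio of peak FP16 throughput to memory bandwidth gives the break-even arithmetic intensity: how many FLOPs a kernel must perform per byte moved before it becomes compute-bound rather than memory-bound. A simplified roofline sketch using only the spec values quoted above:

```python
# Roofline-style break-even point from the spec-table values.
# Simplified sketch: real kernels rarely sustain the peak numbers.
FP16_TFLOPS = 989.4     # from the spec table
BANDWIDTH_TB_S = 3.35   # from the spec table (3350 GB/s)

flops_per_byte = (FP16_TFLOPS * 1e12) / (BANDWIDTH_TB_S * 1e12)
print(round(flops_per_byte))  # ≈ 295 FLOPs/byte
```

Memory-bound phases such as single-batch decoding sit far below this ratio, which is why the 3.35 TB/s of HBM3 bandwidth matters as much as raw TFLOPS for inference.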

Interested in the NVIDIA H100 SXM?

Get pricing, availability, and bulk discount information from our team.


Related GPUs

NVIDIA H100 PCIe
Memory 80GB HBM3
FP32 51.22 TFLOPS
FP16 756 TFLOPS
TDP 350W
Status Available

NVIDIA H200 SXM
Memory 141GB HBM3e
FP32 66.91 TFLOPS
FP16 989.4 TFLOPS
TDP 700W
Status Available

NVIDIA B200
Memory 192GB HBM3e
FP32 90 TFLOPS
FP16 1800 TFLOPS
TDP 1000W
Status Available

NVIDIA GB200 NVL72
Memory 192GB HBM3e (per GPU)
FP32 90 TFLOPS
FP16 1800 TFLOPS
TDP 2700W
Status Available