
NVIDIA H100 PCIe

H-Series · Hopper Architecture

The NVIDIA H100 PCIe brings the Hopper architecture to standard PCIe server infrastructure. With 80 GB of HBM3 and a 350 W TDP, it offers a more accessible entry point to H100-class performance for inference-heavy workloads and mixed AI/HPC environments.

Key Features

4th-gen Tensor Cores
PCIe 5.0 x16
Transformer Engine
Lower TDP than SXM
Drop-in PCIe form factor

Full Specifications

Compute

Architecture Hopper
Process Node 4nm TSMC
CUDA Cores 14,592
Tensor Cores 456
Base Clock 1095 MHz
Boost Clock 1755 MHz
FP32 Performance 51.22 TFLOPS
FP16 Performance 756 TFLOPS
BF16 Performance 756 TFLOPS
INT8 Performance 1513 TOPS

Memory

Memory Size 80 GB
Memory Type HBM3
Memory Bus 5120-bit
Memory Bandwidth 2039 GB/s
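The compute and memory figures above imply a balance point between the two: the ratio of peak FP16 throughput to memory bandwidth is the arithmetic intensity a kernel needs to be compute-bound rather than memory-bound. A rough roofline-style illustration (not an NVIDIA-published metric), using only the numbers listed here:

```python
# Roofline-style balance point for the H100 PCIe, computed from
# the spec-sheet figures above (peak FP16 tensor throughput and
# memory bandwidth). A kernel needs roughly this many FLOPs per
# byte moved to be compute-bound rather than memory-bound.
fp16_tflops = 756      # FP16 Performance, TFLOPS (from specs above)
bandwidth_gbs = 2039   # Memory Bandwidth, GB/s (from specs above)

flops_per_byte = (fp16_tflops * 1e12) / (bandwidth_gbs * 1e9)
print(f"Compute/bandwidth balance: {flops_per_byte:.0f} FLOPs per byte")
```

Workloads well below that intensity (e.g. memory-bound inference decoding) are limited by the 2039 GB/s of bandwidth, not the Tensor Core peak.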

Power & Physical

TDP 350W
Form Factor PCIe
Slot Width 2-slot
Card Length 267 mm
Power Connectors 1x 16-pin

Features & Connectivity

PCIe Version PCIe 5.0 x16
NVLink Support Yes (2-way NVLink bridge)
Multi-GPU Support Yes

Availability

MSRP (USD) $25,000
Release Date Mar 2023
Status Available

Use Cases

AI Inference
HPC
Data Analytics
Natural Language Processing

Interested in the NVIDIA H100 PCIe?

Get pricing, availability, and bulk discount information from our team.


Related GPUs

NVIDIA H100 SXM
80GB HBM3 · FP32 66.91 TFLOPS · FP16 989.4 TFLOPS · TDP 700W · Available

NVIDIA H200 SXM
141GB HBM3e · FP32 66.91 TFLOPS · FP16 989.4 TFLOPS · TDP 700W · Available

NVIDIA B200
192GB HBM3e · FP32 90 TFLOPS · FP16 1800 TFLOPS · TDP 1000W · Available

NVIDIA GB200 NVL72
192GB HBM3e (per GPU) · FP32 90 TFLOPS · FP16 1800 TFLOPS · TDP 2700W · Available
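The overview highlights the PCIe card's lower TDP relative to the SXM parts. Dividing the listed peak FP16 figures by the listed TDPs gives a quick efficiency comparison (an illustrative peak-ratio calculation from this page's numbers, not an NVIDIA-published metric; real efficiency depends on workload and sustained clocks):

```python
# Peak FP16 TFLOPS per watt of TDP, using the spec figures on
# this page. GB200 NVL72 is omitted because its 2700W figure is
# not on the same per-GPU basis as the others.
gpus = {
    "H100 PCIe": (756, 350),     # (FP16 TFLOPS, TDP watts)
    "H100 SXM":  (989.4, 700),
    "H200 SXM":  (989.4, 700),
    "B200":      (1800, 1000),
}

for name, (tflops, watts) in gpus.items():
    print(f"{name}: {tflops / watts:.2f} peak FP16 TFLOPS per watt")
```

On these peak numbers the PCIe card comes out ahead of the SXM parts per watt, which is consistent with its positioning for inference-heavy, power-constrained deployments.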