
NVIDIA B200

B-Series · Blackwell Architecture

The NVIDIA B200 is the flagship Blackwell architecture GPU, delivering up to 2.5x the AI training performance of H100. With 192GB HBM3e and 8 TB/s bandwidth, it sets a new standard for training and running trillion-parameter models. The 5th-generation Tensor Cores and 2nd-generation Transformer Engine with FP4 precision support enable unprecedented inference throughput.
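The FP4 support mentioned above refers to a 4-bit floating-point format (E2M1: one sign bit, two exponent bits, one mantissa bit), which can represent only sixteen values. The sketch below is our own illustration of nearest-value rounding to that value set, assuming the standard E2M1 element format used by microscaling 4-bit inference; real kernels also apply per-block scale factors, which are omitted here:

```python
# The magnitudes representable in FP4 (E2M1); an assumption based on the
# standard microscaling element format, not taken from this page.
FP4_MAGNITUDES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_fp4(x: float) -> float:
    """Round x to the nearest representable FP4 value (ties resolve
    toward the smaller magnitude). Production quantizers also scale
    each block so values land inside the narrow [-6, 6] range."""
    mag = min(FP4_MAGNITUDES, key=lambda v: abs(abs(x) - v))
    return -mag if x < 0.0 else mag

print(quantize_fp4(2.4))   # 2.0
print(quantize_fp4(-5.2))  # -6.0
```

The coarse value grid is why FP4 is paired with fine-grained scaling in the Transformer Engine: without a per-block scale, most weight distributions would collapse onto a handful of representable points.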

Key Features

5th-gen Tensor Cores · NVLink 5.0 (1.8 TB/s) · 2nd-gen Transformer Engine · FP4 support · 192GB HBM3e · Decompression Engine

Full Specifications

Compute

Architecture: Blackwell
Process Node: 4nm TSMC
CUDA Cores: 18,432
Tensor Cores: 576
FP32 Performance: 90 TFLOPS
FP16 Performance: 1,800 TFLOPS
BF16 Performance: 1,800 TFLOPS
INT8 Performance: 3,600 TOPS

Memory

Memory Size: 192 GB
Memory Type: HBM3e
Memory Bus: 6144-bit
Memory Bandwidth: 8,000 GB/s
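Memory bandwidth is often the binding constraint for single-stream LLM decoding, since each generated token must stream the active weights from HBM. The back-of-envelope estimate below is our own illustration using the figures above, not a vendor benchmark, and it ignores KV-cache traffic, batching, and compute overlap:

```python
def min_decode_latency_s(model_bytes: float, bw_bytes_per_s: float) -> float:
    # Lower bound: each decode step reads the full weight set from HBM once.
    return model_bytes / bw_bytes_per_s

# Hypothetical model occupying the full 192 GB, streamed at 8,000 GB/s:
t = min_decode_latency_s(192e9, 8_000e9)
print(f"{t * 1e3:.1f} ms/token, ~{1 / t:.0f} tokens/s upper bound")  # 24.0 ms/token, ~42 tokens/s
```

Smaller models (or FP4-quantized weights) shrink `model_bytes` and raise this ceiling proportionally, which is one reason low-precision formats matter for inference throughput.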

Power & Physical

TDP: 1,000W
Form Factor: SXM
Power Connectors: SXM connector

Features & Connectivity

PCIe Version: PCIe 6.0
NVLink Support: Yes
Multi-GPU Support: Yes

Availability

MSRP (USD): Contact for pricing
Release Date: Nov 2024
Status: Available

Use Cases

Trillion-parameter LLM Training · Real-time LLM Inference · Generative AI · Scientific Simulation

Interested in the NVIDIA B200?

Get pricing, availability, and bulk discount information from our team.

Enquire Now

Related GPUs

NVIDIA H100 SXM: 80GB HBM3, 66.91 TFLOPS FP32, 989.4 TFLOPS FP16, 700W TDP (Available)

NVIDIA H100 PCIe: 80GB HBM3, 51.22 TFLOPS FP32, 756 TFLOPS FP16, 350W TDP (Available)

NVIDIA H200 SXM: 141GB HBM3e, 66.91 TFLOPS FP32, 989.4 TFLOPS FP16, 700W TDP (Available)

NVIDIA GB200 NVL72: 192GB HBM3e per GPU, 90 TFLOPS FP32, 1,800 TFLOPS FP16, 2,700W TDP (Available)