Google TPU v4

TPU v4 · v4 Architecture

The Google TPU v4 is a high-performance training accelerator that powered many of Google's foundational AI models. With 32GB HBM and 275 TFLOPS BF16 per chip, TPU v4 pods scale up to 4096 chips using optical circuit switching for flexible network topologies, enabling efficient distributed training at massive scale.
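As a rough illustration using only the per-chip figures quoted above, the aggregate numbers for a full 4096-chip pod can be sketched as follows (assuming ideal scaling; real sustained throughput is lower):

```python
# Back-of-the-envelope pod totals from the per-chip specs above.
# Assumes ideal linear scaling across the pod.
CHIPS_PER_POD = 4096
TFLOPS_BF16_PER_CHIP = 275   # peak BF16 throughput per chip
HBM_GB_PER_CHIP = 32         # HBM capacity per chip

pod_exaflops = CHIPS_PER_POD * TFLOPS_BF16_PER_CHIP / 1e6  # TFLOPS -> EFLOPS
pod_hbm_tb = CHIPS_PER_POD * HBM_GB_PER_CHIP / 1024        # GB -> TB

print(f"Peak pod compute: ~{pod_exaflops:.2f} EFLOPS BF16")  # ~1.13 EFLOPS
print(f"Total pod HBM:    {pod_hbm_tb:.0f} TB")              # 128 TB
```

That is, a full pod offers on the order of 1.1 exaFLOPS of peak BF16 compute and 128 TB of aggregate HBM.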

Key Features

32GB HBM per chip
275 TFLOPS BF16
ICI interconnect
4096-chip pod configurations
Optical Circuit Switching

Full Specifications

Compute

Architecture TPU v4
BF16 Performance 275 TFLOPS

Memory

Memory Size 32 GB
Memory Type HBM
Memory Bandwidth 1200 GB/s
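Taken together, the peak compute and memory bandwidth figures above imply a roofline ridge point: the arithmetic intensity (FLOPs per byte of HBM traffic) a kernel needs before it becomes compute-bound rather than bandwidth-bound. A minimal sketch, using only the peak numbers quoted here:

```python
# Roofline ridge point from the peak specs above. Kernels below this
# arithmetic intensity are limited by HBM bandwidth; kernels above it
# are limited by compute. Peak figures only; sustained rates are lower.
PEAK_FLOPS = 275e12      # 275 TFLOPS BF16
PEAK_BANDWIDTH = 1200e9  # 1200 GB/s HBM

ridge_flops_per_byte = PEAK_FLOPS / PEAK_BANDWIDTH
print(f"Ridge point: ~{ridge_flops_per_byte:.0f} FLOPs/byte")  # ~229
```

In practice this means low-intensity workloads (e.g. elementwise ops) run bandwidth-bound on this chip, while large matrix multiplications can approach peak BF16 throughput.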

Power & Physical

Form Factor Custom ASIC

Features & Connectivity

NVLink Support No
Multi-GPU Support Yes

Availability

MSRP (USD) Contact for pricing
Release Date Jul 2022
Status Available

Use Cases

LLM Training
Large-scale Training
Scientific Computing
Drug Discovery


Related GPUs

Google TPU v5p: 95GB HBM
Google TPU v5e: 16GB HBM