
Google TPU v5p

TPU v5 · v5p Architecture

The Google TPU v5p is Google's most powerful Tensor Processing Unit, designed for training the largest AI models. Each chip pairs 95 GB of HBM2e memory with 459 TFLOPS of BF16 compute, and pods scale to 8,960 chips connected via Google's high-bandwidth Inter-Chip Interconnect (ICI) for distributed training. It powers Google's most demanding internal workloads, including Gemini.
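The per-chip figures above imply some useful pod-level aggregates. A back-of-envelope sketch, assuming a full pod of 8,960 chips as described in Google's v5p announcement:

```python
# Pod-level aggregates implied by the per-chip specs.
# Assumes a full pod of 8,960 chips (per Google's TPU v5p announcement).
CHIPS_PER_POD = 8_960
BF16_TFLOPS_PER_CHIP = 459
HBM_GB_PER_CHIP = 95

pod_bf16_exaflops = CHIPS_PER_POD * BF16_TFLOPS_PER_CHIP / 1e6  # TFLOPS -> EFLOPS
pod_hbm_tb = CHIPS_PER_POD * HBM_GB_PER_CHIP / 1e3              # GB -> TB

print(f"Pod peak BF16: {pod_bf16_exaflops:.2f} EFLOPS")  # ~4.11 EFLOPS
print(f"Pod HBM:       {pod_hbm_tb:.1f} TB")             # ~851.2 TB
```

The ~4.1 EFLOPS figure matches the peak BF16 throughput Google quotes for a full v5p pod.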

Key Features

8,960 chips per pod
95 GB HBM per chip
ICI interconnect
Scalable to thousands of chips
Optimized for JAX/TensorFlow

Full Specifications

Compute

Architecture TPU v5p
Chips per Pod 8,960
BF16 Performance (per chip) 459 TFLOPS

Memory

Memory Size 95 GB
Memory Type HBM2e
Memory Bandwidth 2765 GB/s
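Taken together, the compute and bandwidth figures give a rough roofline "machine balance" for a single chip. A minimal sketch (the break-even arithmetic intensity is a derived estimate, not a published spec):

```python
# Roofline break-even point for one TPU v5p chip:
# the arithmetic intensity (FLOPs per byte of HBM traffic)
# above which a kernel is compute-bound rather than memory-bound.
BF16_FLOPS = 459e12  # 459 TFLOPS peak BF16
HBM_BW = 2765e9      # 2765 GB/s HBM bandwidth

machine_balance = BF16_FLOPS / HBM_BW
print(f"Break-even intensity: {machine_balance:.0f} FLOPs/byte")  # ~166
```

Workloads well below ~166 FLOPs/byte (e.g. memory-bound attention variants) leave peak compute on the table, which is why large matrix multiplies dominate TPU utilization.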

Power & Physical

Form Factor Custom ASIC

Features & Connectivity

NVLink Support No
Multi-GPU Support Yes

Availability

MSRP (USD) Contact for pricing
Release Date Dec 2023
Status Available


Use Cases

LLM Training
Large-scale Model Training
Generative AI
Scientific Computing
Multi-modal AI

Interested in the Google TPU v5p?

Get pricing, availability, and bulk discount information from our team.


Related GPUs

Google TPU v5e — 16GB HBM
Google TPU v4 — 32GB HBM