
NVIDIA A30

A-Series · Ampere Architecture

The NVIDIA A30 is a mainstream data center accelerator optimized for AI inference and lightweight training workloads. Its 165W TDP and MIG support make it ideal for dense, multi-tenant inference deployments.

Key Features

MIG support (up to 4 instances)
Low 165W TDP
24 GB HBM2e
PCIe 4.0
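The MIG feature listed above is configured with nvidia-smi. A minimal operational sketch, assuming GPU index 0, root access, and an idle card (profile names shown are the A30's documented MIG profiles):

```shell
# Enable MIG mode on GPU 0 (requires root; the GPU must have no active workloads)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles the card supports (A30: 1g.6gb, 2g.12gb, 4g.24gb)
sudo nvidia-smi mig -i 0 -lgip

# Create four 1g.6gb GPU instances and a compute instance in each (-C)
sudo nvidia-smi mig -i 0 -cgi 1g.6gb,1g.6gb,1g.6gb,1g.6gb -C

# Verify: each MIG device now appears with its own UUID
nvidia-smi -L
```

Each resulting 1g.6gb instance has its own 6 GB memory slice and isolated compute, which is what enables the dense multi-tenant inference deployments described above.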

Full Specifications

Compute

Architecture: Ampere
Process Node: 7nm (TSMC)
CUDA Cores: 3,584
Tensor Cores: 224 (third-generation)
Base Clock: 930 MHz
Boost Clock: 1,440 MHz
FP32 Performance: 10.32 TFLOPS
FP16 Tensor Performance: 165 TFLOPS (dense)
BF16 Tensor Performance: 165 TFLOPS (dense)
INT8 Tensor Performance: 330 TOPS (dense)
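The FP32 figure follows directly from the core count and boost clock, since each CUDA core can retire one fused multiply-add (2 FLOPs) per cycle at peak. A quick check:

```python
cuda_cores = 3584
boost_clock_hz = 1.440e9  # 1,440 MHz boost clock

# Peak FP32: one FMA (2 FLOPs) per core per clock
peak_fp32_tflops = cuda_cores * 2 * boost_clock_hz / 1e12
print(round(peak_fp32_tflops, 2))  # 10.32

# The FP16 Tensor Core figure (165 TFLOPS) is ~16x the FP32 rate,
# which matches Ampere's tensor-core throughput multiplier.
print(round(165 / peak_fp32_tflops))  # 16
```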

Memory

Memory Size: 24 GB
Memory Type: HBM2e
Memory Bus: 3,072-bit
Memory Bandwidth: 933 GB/s
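The bandwidth figure is consistent with the bus width. A sanity check (the per-pin data rate is inferred from these two specs, not stated in the source):

```python
bandwidth_gb_s = 933               # GB/s, from the spec table
bus_width_bits = 3072
bytes_per_transfer = bus_width_bits // 8   # 384 bytes move per transfer

# Implied transfer rate across the HBM2e interface
rate_gt_s = bandwidth_gb_s / bytes_per_transfer
print(round(rate_gt_s, 2))  # 2.43 GT/s, i.e. ~2.4 Gbps per pin
```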

Power & Physical

TDP: 165W
Form Factor: PCIe
Slot Width: 2-slot
Card Length: 267 mm
Power Connectors: 1x 8-pin

Features & Connectivity

PCIe Version: PCIe 4.0 x16
NVLink Support: Yes (third-generation NVLink bridge, 200 GB/s, two-GPU)
Multi-GPU Support: Yes
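For sizing host-to-device transfers, the PCIe 4.0 x16 link's usable bandwidth can be derived from the PCIe spec (16 GT/s per lane, 128b/130b encoding); this is a back-of-envelope sketch, not a measured figure:

```python
gt_per_s_per_lane = 16e9   # PCIe 4.0 raw rate per lane
lanes = 16
encoding_efficiency = 128 / 130  # 128b/130b line encoding overhead

# Peak usable bandwidth per direction, in GB/s
bw_gb_s = gt_per_s_per_lane * lanes * encoding_efficiency / 8 / 1e9
print(round(bw_gb_s, 1))  # 31.5 GB/s per direction (~63 GB/s bidirectional)
```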

Availability

MSRP (USD): Contact for pricing
Release Date: Jun 2021
Status: Available

Use Cases

AI Inference · Data Analytics · Edge AI · Video Analytics

Interested in the NVIDIA A30?

Get pricing, availability, and bulk discount information from our team.


Related GPUs

NVIDIA H100 SXM: 80GB HBM3 · FP32 66.91 TFLOPS · FP16 989.4 TFLOPS · TDP 700W · Available
NVIDIA H100 PCIe: 80GB HBM3 · FP32 51.22 TFLOPS · FP16 756 TFLOPS · TDP 350W · Available
NVIDIA H200 SXM: 141GB HBM3e · FP32 66.91 TFLOPS · FP16 989.4 TFLOPS · TDP 700W · Available
NVIDIA B200: 192GB HBM3e · FP32 90 TFLOPS · FP16 1800 TFLOPS · TDP 1000W · Available