NVIDIA Hopper GPU
Realizes: Transformer and dense matrix acceleration
The Hopper family (H100) is NVIDIA's GPU architecture for large-scale transformer training. It pairs a new Transformer Engine, which dynamically mixes FP8 and FP16 precision, with CUDA/SIMT cores and fourth-generation tensor cores on TSMC's 4N process (a custom 5nm-class FinFET node); HGX H100 baseboards tie up to eight Hopper GPUs together via NVLink and NVSwitch to deliver high aggregate throughput for massive AI workloads.
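The core idea behind tensor-core mixed precision, multiplying low-precision inputs while accumulating in a wider format, can be sketched in NumPy. This is an illustrative emulation under assumed semantics (FP16 inputs, FP32 accumulation), not NVIDIA's implementation:

```python
import numpy as np

def mixed_precision_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Emulate a tensor-core-style matmul: quantize the inputs to
    float16, then accumulate the products in float32.
    Illustrative sketch only, not the hardware datapath."""
    a16 = a.astype(np.float16)
    b16 = b.astype(np.float16)
    # Widen back to float32 so the dot-product accumulation
    # happens at higher precision than the inputs.
    return a16.astype(np.float32) @ b16.astype(np.float32)

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64))
b = rng.standard_normal((64, 64))
approx = mixed_precision_matmul(a, b)
exact = a @ b  # full float64 reference
```

The error relative to the float64 reference comes almost entirely from quantizing the inputs; accumulating in FP32 avoids the much larger error that summing 64 products in FP16 would introduce.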
Examples
NVIDIA DGX H100
The DGX H100 appliance connects eight H100 SXM5 modules over an HGX H100 baseboard with fourth-generation NVLink and NVSwitch, delivering high-throughput transformer training and inference through the Transformer Engine's FP8/FP16 mixed precision.
[MIXED_PRECISION_MATMUL,TRANSFORMER_ENGINE]
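Running matmuls at very low precision requires keeping tensor values inside the narrow representable range, which the Transformer Engine handles with per-tensor scaling. A minimal sketch of that scaling step, assuming the FP8 E4M3 format whose largest normal value is 448 (the helper name `fp8_scale` is hypothetical):

```python
import numpy as np

E4M3_MAX = 448.0  # largest finite value representable in FP8 E4M3

def fp8_scale(t: np.ndarray) -> float:
    """Per-tensor scale factor mapping the tensor's absolute maximum
    onto the FP8 E4M3 range (illustrative, hypothetical helper)."""
    amax = float(np.max(np.abs(t)))
    return E4M3_MAX / amax if amax > 0.0 else 1.0

t = np.array([0.001, -3.5, 7.0])
s = fp8_scale(t)        # 448 / 7 = 64.0
scaled = t * s          # values now fit in [-448, 448] for the FP8 matmul
recovered = scaled / s  # dequantize the result afterwards
print(s)  # → 64.0
```

In practice the scale is chosen from a running history of observed maxima rather than recomputed exactly per step, but the range-mapping idea is the same.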
Timescale: nanoseconds · Scale: large · Energy: pJ