Graphcore IPU
Realizes: machine intelligence workloads
Graphcore's Intelligence Processing Unit (IPU) is a massively parallel AI accelerator built from many small tile cores, each pairing a processor with its own local SRAM, linked by an on-chip exchange interconnect; the second-generation GC200 IPU has 1,472 such tiles holding roughly 900 MB of In-Processor Memory. Tiles compute and communicate in alternating bulk-synchronous phases, which suits fine-grained, sparse, and graph-structured machine-intelligence workloads. IPUs are deployed in IPU-M2000 modules (four IPUs each) and IPU-POD16 systems (four IPU-M2000s, i.e. sixteen IPUs).
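As a concrete illustration of how such a workload is mapped onto an IPU, the sketch below compiles a small PyTorch model with Graphcore's PopTorch wrapper, which traces it into a Poplar compute graph and schedules it across the tiles' compute and exchange phases. This is a minimal, hypothetical sketch (placeholder model, synthetic data) rather than reference code; it assumes the Poplar SDK with the poptorch package and an IPU (or the IPU Model emulator) is available, and exact option names should be checked against the SDK documentation.

```python
# Minimal PopTorch sketch: placeholder model and synthetic data; assumes the
# Graphcore Poplar SDK (poptorch) plus an IPU or the IPU Model emulator.
import torch
import poptorch


class TinyClassifier(torch.nn.Module):
    """Placeholder model; any torch.nn.Module can be compiled the same way."""

    def __init__(self, in_dim=128, hidden=256, classes=10):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(in_dim, hidden),
            torch.nn.GELU(),
            torch.nn.Linear(hidden, classes),
        )
        self.loss = torch.nn.CrossEntropyLoss()

    def forward(self, x, labels=None):
        out = self.net(x)
        if labels is None:                  # inference path
            return out
        return out, self.loss(out, labels)  # PopTorch expects the loss returned from forward


model = TinyClassifier()
opts = poptorch.Options()
opts.deviceIterations(16)                   # run 16 micro-batches per host round trip

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
train_model = poptorch.trainingModel(model, options=opts, optimizer=optimizer)

# Synthetic stand-in for a real dataset.
dataset = torch.utils.data.TensorDataset(torch.randn(2048, 128),
                                         torch.randint(0, 10, (2048,)))
loader = poptorch.DataLoader(opts, dataset, batch_size=8, shuffle=True)

for features, labels in loader:
    _, loss = train_model(features, labels)  # compiled on first call, then runs on the IPU
```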
Examples
Transformer training on Graphcore IPU
Training a large transformer language model on Graphcore IPUs, with fine-grained model parallelism across IPUs (one realization is sketched below) and low-latency tensor arithmetic served from on-chip SRAM.
Operations: matrix multiplication; sparse and dense tensor contractions; collective inter-IPU communication
Parallelism: high
Scale: large
Energy: ~1.5 pJ/MAC
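One common way to realize this fine-grained model parallelism is to pipeline the transformer's layers across several IPUs so that each chip holds only a slice of the weights in its local SRAM. The sketch below shows the general shape of that mapping using PopTorch's block annotations; the toy model, layer-to-IPU assignment, and hyperparameters are hypothetical placeholders (it assumes four IPUs are available), and this is a sketch of the technique rather than Graphcore's published training recipe, so exact API details should be checked against the Poplar SDK documentation.

```python
import torch
import poptorch


class ToyTransformerLM(torch.nn.Module):
    """Hypothetical small transformer LM, used only to illustrate layer placement."""

    def __init__(self, vocab=32000, d_model=512, n_layers=4, n_heads=8):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab, d_model)
        self.layers = torch.nn.ModuleList(
            [torch.nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
             for _ in range(n_layers)]
        )
        self.head = torch.nn.Linear(d_model, vocab)
        self.loss = torch.nn.CrossEntropyLoss()

        # Pipeline stages: each annotated module starts a new block on the given IPU,
        # so its weights and activations stay in that IPU's local SRAM.
        self.embed = poptorch.BeginBlock(self.embed, ipu_id=0)
        for i, layer in enumerate(self.layers):
            self.layers[i] = poptorch.BeginBlock(layer, ipu_id=i)   # one layer per IPU
        self.head = poptorch.BeginBlock(self.head, ipu_id=n_layers - 1)

    def forward(self, tokens, labels=None):
        h = self.embed(tokens)
        for layer in self.layers:
            h = layer(h)
        logits = self.head(h)
        if labels is None:
            return logits
        return logits, self.loss(logits.view(-1, logits.size(-1)), labels.view(-1))


model = ToyTransformerLM()
opts = poptorch.Options()
opts.Training.gradientAccumulation(16)   # keep all pipeline stages busy

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
train_model = poptorch.trainingModel(model, options=opts, optimizer=optimizer)

tokens = torch.randint(0, 32000, (16, 128))  # synthetic batch: micro-batch 1 x 16 accumulation steps
_, loss = train_model(tokens, tokens)        # toy objective: predict the input tokens themselves
```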
Graphcore IPU-POD16 GNN workloads
An IPU-POD16 system, built from four IPU-M2000 modules, runs sparse tensor graph processing pipelines to accelerate GNN training and inference over large knowledge graphs, where irregular gather/scatter access patterns benefit from the low-latency on-chip SRAM (the core message-passing pattern is sketched below).
Operations: graph neural network message passing; sparse tensor graph traversal
Timescale: seconds
Scale: large
Energy: mJ
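The core of this sparse workload is message passing: for every edge, a message is gathered from the source node and scattered (accumulated) into the destination node, a pattern that maps naturally onto many small tiles with fast local SRAM. The sketch below shows that gather/scatter step in plain PyTorch as a framework-agnostic illustration; it is not Graphcore-specific code, and the toy graph, feature sizes, and update rule are arbitrary placeholders.

```python
import torch


def message_passing_step(x, edge_index, weight):
    """One round of mean-aggregated message passing.

    x          : (num_nodes, feat)   node features
    edge_index : (2, num_edges)      [source; destination] node ids
    weight     : (feat, feat)        linear transform applied to messages
    """
    src, dst = edge_index
    messages = x[src] @ weight                    # gather features from source nodes
    aggregated = torch.zeros_like(x)
    aggregated.index_add_(0, dst, messages)       # scatter-add into destination nodes
    degree = torch.zeros(x.size(0)).index_add_(0, dst, torch.ones(dst.size(0)))
    aggregated = aggregated / degree.clamp(min=1).unsqueeze(-1)   # mean over in-edges
    return torch.relu(x + aggregated)             # residual node update


# Toy graph: 5 nodes, 6 directed edges, 8-dimensional features.
x = torch.randn(5, 8)
edge_index = torch.tensor([[0, 1, 2, 3, 4, 0],
                           [1, 2, 3, 4, 0, 2]])
weight = torch.randn(8, 8) * 0.1
x = message_passing_step(x, edge_index, weight)
```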