Google TPU v3
Realizes: AI training and inference acceleration
The third-generation Google TPU pairs bfloat16/float32 matrix multiply units (MXUs) with HBM2 memory, and Cloud TPU v3 pods deliver roughly 8x the performance of the previous-generation TPU v2 pods, accelerating both training and inference at scale.
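For illustration, a minimal JAX sketch (sizes hypothetical) of the MXU-style computation: bfloat16 inputs multiplied with float32 accumulation.

```python
import jax.numpy as jnp

# Hypothetical sizes; TPU v3 MXUs tile matmuls internally.
a = jnp.ones((1024, 1024), dtype=jnp.bfloat16)
b = jnp.ones((1024, 1024), dtype=jnp.bfloat16)

# preferred_element_type requests float32 accumulation, matching the
# MXU's bfloat16-multiply / float32-accumulate datapath.
c = jnp.matmul(a, b, preferred_element_type=jnp.float32)
```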
Examples
Cloud TPU v3 pod
Cloud TPU v3 pods combine Google TPU v3 chips over a dedicated high-speed interconnect, delivering roughly 8x the performance of TPU v2 pods for distributed training and inference; see the sketch below.
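As a rough sketch of pod-style data parallelism (model, names, and shapes hypothetical), JAX's pmap replicates a training step across the local TPU cores and averages gradients over the interconnect:

```python
import jax
import jax.numpy as jnp

def step(w, x):
    # Per-core gradient of a toy quadratic loss (hypothetical model).
    g = jax.grad(lambda w: jnp.mean((x @ w) ** 2))(w)
    # pmean averages gradients across all cores over the pod interconnect.
    return w - 0.01 * jax.lax.pmean(g, axis_name="devices")

n = jax.local_device_count()                   # TPU cores on this host
ws = jnp.zeros((n, 8, 8), dtype=jnp.bfloat16)  # replicated weights
xs = jnp.ones((n, 16, 8), dtype=jnp.bfloat16)  # per-core batch shards
ws = jax.pmap(step, axis_name="devices")(ws, xs)
```

On a pod slice, the same pattern scales out: each host runs this program and the collective ops span every core in the slice.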
Bfloat16/float32 matrix multiply and convolution pipelines
high
cloud-scale
low