05/04/2021

Exploring the Limits of Concurrency in ML Training on Google TPUs

Sameer Kumar, Yu (Emma) Wang, Cliff Young, James Bradbury, Naveen Kumar, Dehao Chen, Andy Swing

Keywords:

Abstract: Recent results in language understanding using neural networks have required training hardware of unprecedented scale, with thousands of chips cooperating on a single training run. This paper presents techniques to scale ML models on the Google TPU Multipod, a mesh with 4096 TPU-v3 chips. We discuss model parallelism to overcome scaling limitations from the fixed batch size in data parallelism, communication/collective optimizations, distributed evaluation of training metrics, and host input processing scaling optimizations. These techniques are demonstrated in both the TensorFlow and JAX programming frameworks. We also present performance results from the recent Google submission to the MLPerf-v0.7 benchmark contest, achieving record training times from 16 to 28 seconds in four MLPerf models on the Google TPU-v3 Multipod machine.
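The abstract mentions data parallelism with a fixed global batch size, collective (all-reduce) optimizations, and demonstrations in JAX. The sketch below is a minimal, illustrative data-parallel training step in JAX; it is not code from the paper. The linear model, the 0.1 learning rate, and the tensor shapes are placeholder assumptions, and pmap/pmean over local devices stand in for the far larger multipod setup the paper describes.

    from functools import partial

    import jax
    import jax.numpy as jnp

    def loss_fn(params, batch):
        # Placeholder linear model; the paper's MLPerf workloads are far larger.
        preds = batch["x"] @ params["w"]
        return jnp.mean((preds - batch["y"]) ** 2)

    @partial(jax.pmap, axis_name="batch")
    def train_step(params, batch):
        grads = jax.grad(loss_fn)(params, batch)
        # All-reduce gradients across devices: the kind of collective whose
        # scaling behavior the paper targets on TPU multipods.
        grads = jax.lax.pmean(grads, axis_name="batch")
        return jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)

    num_devices = jax.local_device_count()
    params = jax.device_put_replicated({"w": jnp.zeros((8, 1))}, jax.local_devices())
    batch = {
        "x": jnp.ones((num_devices, 16, 8)),  # per-device shard of the global batch
        "y": jnp.ones((num_devices, 16, 1)),
    }
    params = train_step(params, batch)

Each device computes gradients on its shard and averages them with its peers before the update, which is why the fixed global batch size becomes the scaling limit the paper's model-parallelism techniques address.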

The video of this talk cannot be embedded. You can watch it here:
https://slideslive.com/38952755
The talk and the respective paper are published at the MLSYS 2021 virtual conference.
