TensorFlow provides the tf.distribute.Strategy API for distributing training across multiple GPUs or TPUs. For a detailed overview of distributed training, including examples, see https://www.tensorflow.org/guide/distributed_training. Training at scale on Google Cloud is described in detail at https://cloud.google.com/ai-platform/training/docs/training-at-scale.
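As a minimal sketch of how the API is typically used (assuming TensorFlow 2.x with Keras and synthetic stand-in data, not an example from this text), tf.distribute.MirroredStrategy replicates a model across the GPUs on a single machine and averages gradients across replicas each step:

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy mirrors model variables on every visible GPU and
# synchronizes gradients across replicas after each batch.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

# Variables (layers, optimizer state) must be created inside the
# strategy's scope so they are placed on all replicas.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Hypothetical random data stands in for a real dataset; model.fit
# splits each global batch across the replicas automatically.
x = np.random.rand(1024, 20).astype("float32")
y = np.random.rand(1024, 1).astype("float32")
model.fit(x, y, epochs=2, batch_size=64)
```

The same training code runs unchanged on a single GPU; the strategy object is what determines how the work is spread across devices.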
Distributed training can also be set up on Compute Engine in the cloud. To enable this, open Cloud Shell in GCP and, for the TensorFlow cluster, set up a virtual machine instance ...
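As a rough sketch of how such a cluster of VM instances is wired together (the worker addresses and task index below are placeholders, not values from this text), each VM exports a TF_CONFIG environment variable describing the cluster before training starts, and tf.distribute.MultiWorkerMirroredStrategy reads it to synchronize gradients across machines:

```python
import json
import os

import tensorflow as tf

# TF_CONFIG must be set before the strategy is created. The addresses
# and the task index are hypothetical; each VM in the cluster uses its
# own index.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {
        "worker": ["10.0.0.1:12345", "10.0.0.2:12345"],  # placeholder VM addresses
    },
    "task": {"type": "worker", "index": 0},  # this VM's role in the cluster
})

# MultiWorkerMirroredStrategy coordinates all workers listed in TF_CONFIG
# and averages gradients over the network each step.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
```

The same script is launched on every VM; only the task index in TF_CONFIG differs from machine to machine.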