Video description
TensorFlow’s tf.distribute library helps you scale your model from a single GPU to multiple GPUs and to multiple machines using simple APIs that require very few changes to your existing code.
Join Taylor Robie and Priya Gupta (Google) to learn how you can use tf.distribute to scale your machine learning model on a variety of hardware platforms ranging from commercial cloud platforms to dedicated hardware. You’ll learn tools and tips to get the best scaling for your training in TensorFlow.
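To give a sense of the "very few changes" the description mentions, here is a minimal sketch (not taken from the session) of single-machine, multi-GPU training with tf.distribute.MirroredStrategy; the toy model and synthetic data are placeholders for illustration only.

```python
import tensorflow as tf

# MirroredStrategy replicates the model across all visible GPUs on this
# machine (it falls back to CPU if no GPU is available).
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

# Building and compiling the model inside strategy.scope() ensures its
# variables are created as mirrored variables on every replica.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Synthetic data purely for illustration.
x = tf.random.normal((1024, 10))
y = tf.random.normal((1024, 1))

# model.fit automatically splits each batch across the replicas and
# aggregates the gradients, so the training loop itself is unchanged.
model.fit(x, y, batch_size=64, epochs=2)
```

The training code stays the same as in the single-GPU case; only the strategy setup and the scope around model construction are added.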
Prerequisite knowledge
- Familiarity with TensorFlow
What you'll learn
- Learn how to distribute TensorFlow training across a variety of hardware using TensorFlow 2.0 best practices (see the multi-worker sketch after this list)
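For the multi-machine case, the usual TensorFlow 2.0 pattern is tf.distribute.experimental.MultiWorkerMirroredStrategy configured through a TF_CONFIG environment variable. The sketch below assumes a hypothetical two-worker cluster (the host1/host2 addresses and the worker index are placeholders) and is meant to be run once on each worker, not standalone.

```python
import json
import os

import tensorflow as tf

# Each worker sets TF_CONFIG to describe the full cluster and its own role.
# The addresses below are placeholders; set "index" to this worker's rank.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["host1:12345", "host2:12345"]},
    "task": {"type": "worker", "index": 0},
})

# In TensorFlow 2.0/2.1 this strategy lives under tf.distribute.experimental.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()

# Model construction is identical to the single-machine case; gradients are
# all-reduced across workers during training.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
```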
Product information
- Title: Scaling TensorFlow using tf.distribute
- Author(s): Taylor Robie, Priya Gupta
- Release date: February 2020
- Publisher(s): O'Reilly Media, Inc.
- ISBN: 0636920373612