Video description
TensorFlow’s tf.distribute library helps you scale your model from a single GPU to multiple GPUs and to multiple machines using simple APIs that require very few changes to your existing code.
Join Taylor Robie and Priya Gupta (Google) to learn how you can use tf.distribute to scale your machine learning model on a variety of platforms, ranging from commercial clouds to dedicated hardware. You’ll also pick up tools and tips for getting the best scaling for your training in TensorFlow.
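As a concrete illustration of the "very few changes" claim, here is a minimal sketch (not taken from the video; the model, data, and hyperparameters are purely illustrative) of the usual tf.distribute pattern with Keras: create a tf.distribute.MirroredStrategy, build and compile the model inside strategy.scope(), and leave the rest of the training code unchanged.

```python
import tensorflow as tf

# MirroredStrategy replicates model variables across all local GPUs;
# with no GPU present it simply runs on the single available device.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Variables created here (model weights, optimizer slots) are
    # mirrored on every replica. The model below is a toy example.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# The training call is identical to the single-device version;
# tf.distribute splits each batch across replicas and aggregates gradients.
features = tf.random.normal((256, 10))
labels = tf.random.normal((256, 1))
model.fit(features, labels, epochs=2, batch_size=32)
```

The same scope-based pattern applies to the other distribution strategies, so moving to different hardware typically means swapping the strategy object rather than rewriting the training loop.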
Prerequisite knowledge
- Familiarity with TensorFlow
What you'll learn
- Learn how to distribute TensorFlow training across a variety of hardware using TensorFlow 2.0 best practices (see the multi-machine sketch below)
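For the multi-machine end of that range, a hedged sketch follows: the same Keras code can be scaled across workers with tf.distribute.MultiWorkerMirroredStrategy (exposed as tf.distribute.experimental.MultiWorkerMirroredStrategy in early TF 2.x releases). The host names, ports, and model here are hypothetical placeholders, not details from the video.

```python
import json
import os

import tensorflow as tf

# Each worker must set TF_CONFIG before creating the strategy. The host
# names and ports are placeholders; "index" identifies this machine.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {
        "worker": ["worker0.example.com:12345", "worker1.example.com:12345"],
    },
    "task": {"type": "worker", "index": 0},
})

# The strategy reads TF_CONFIG and coordinates gradient aggregation across
# machines; run this same script on every worker in the cluster (each
# process blocks until all workers have joined).
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    # Identical model-building code to the single-machine sketch above.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
```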
Product information
- Title: Scaling TensorFlow using tf.distribute
- Author(s): Taylor Robie, Priya Gupta
- Release date: February 2020
- Publisher(s): O'Reilly Media, Inc.
- ISBN: 0636920373612
You might also like
- Book: What's New in TensorFlow 2.0. Get to grips with key structural changes in TensorFlow 2.0. Key features: explore TF Keras APIs …
- Video: TensorFlow model optimization: Quantization and pruning. Join Raziel Alvarez (Google) to learn from TensorFlow performance experts who cover topics including optimization, quantization, …
- Video: Packaging Machine Learning Models with Docker. One of the important aspects of MLOps, also known as Machine Learning Operations or Operationalizing Machine …
- Video: Advanced model deployments with TensorFlow Serving. TensorFlow Serving is one of the cornerstones in the TensorFlow ecosystem. It has eased the deployment …