TensorFlow’s tf.distribute library helps you scale your model from a single GPU to multiple GPUs and to multiple machines using simple APIs that require very few changes to your existing code.
Join Taylor Robie and Priya Gupta (Google) to learn how you can use tf.distribute to scale your machine learning model on a variety of hardware platforms ranging from commercial cloud platforms to dedicated hardware. You’ll learn tools and tips to get the best scaling for your training in TensorFlow.
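The description above notes that tf.distribute requires very few changes to existing code. As a minimal sketch of what that typically looks like (the model, data, and hyperparameters here are illustrative, not from the course), wrapping model construction in a `MirroredStrategy` scope is often the only change needed to replicate training across all visible GPUs:

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy replicates the model across all visible GPUs;
# it falls back to a single CPU or GPU when only one device is present.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

# The key change: build and compile the model inside strategy.scope()
# so variables are created as mirrored variables.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Illustrative dummy data; model.fit shards each batch across replicas.
x = np.random.rand(256, 10).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(x, y, batch_size=32, epochs=1, verbose=0)
```

The same pattern extends to multiple machines by swapping in a different strategy (for example `MultiWorkerMirroredStrategy`) while leaving the model-building code unchanged.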
Prerequisites
- Familiarity with TensorFlow
What you'll learn
- Learn best practices for distributing TensorFlow 2.0 training across a variety of hardware platforms
- Title: Scaling TensorFlow using tf.distribute
- Release date: February 2020
- Publisher(s): O'Reilly Media, Inc.
- ISBN: 0636920373612