TensorFlow’s tf.distribute library helps you scale your model from a single GPU to multiple GPUs and to multiple machines using simple APIs that require very few changes to your existing code.
Join Taylor Robie and Priya Gupta (Google) to learn how you can use tf.distribute to scale your machine learning model on a variety of hardware platforms ranging from commercial cloud platforms to dedicated hardware. You’ll learn tools and tips to get the best scaling for your training in TensorFlow.
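To illustrate the "very few changes" claim, here is a minimal sketch (not taken from the course itself) of wrapping a Keras model in `tf.distribute.MirroredStrategy`, which replicates training across all GPUs on one machine and falls back to the CPU when none are available:

```python
import tensorflow as tf

# MirroredStrategy synchronously replicates the model across all local
# GPUs; on a machine with no GPUs it runs on a single CPU replica.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Variables created inside strategy.scope() are mirrored across replicas.
# This is typically the only change needed to an existing Keras workflow.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(10,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
```

After this, `model.fit` can be called as usual; the strategy handles splitting each batch across replicas and aggregating gradients. Multi-machine training swaps in `MultiWorkerMirroredStrategy` with the same scope-based pattern.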
Prerequisites
- Familiarity with TensorFlow
What you'll learn
- How to distribute TensorFlow training across a variety of hardware using TensorFlow 2.0 best practices
- Title: Scaling TensorFlow using tf.distribute
- Release date: February 2020
- Publisher(s): O'Reilly Media, Inc.
- ISBN: 0636920373612