What if we'd like to scale out our compute resources? We can distribute our TensorFlow processes across multiple workers to speed up training. There are three main frameworks for distributing TensorFlow: native distributed TensorFlow, TensorFlowOnSpark, and Horovod. In this section, we will focus exclusively on native distributed TensorFlow.
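As a minimal sketch of how native distributed TensorFlow (the TF 1.x API) is set up, every process describes the same cluster and then starts a server for its own role in it. The host:port addresses, job names, and task indices below are placeholders for illustration, not values from this book:

```python
import tensorflow as tf

# Describe the whole cluster once; every participating process
# constructs the same spec. The addresses below are placeholders.
cluster = tf.train.ClusterSpec({
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
})

# Each process starts a server for its own job name and task index;
# this process, for example, acts as the first worker.
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# A parameter server process would instead block and serve variables:
# server.join()
```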
In the world of distributed processing, there are two approaches we can take to distribute the computational load of our model, namely model parallelism and data parallelism (both are sketched in the example that follows):

- Model parallelism: The model itself is split across devices, with each device computing a different part of the graph.
- Data parallelism: The full model is replicated on every device, each replica trains on a different slice of the input data, and the resulting gradients are combined to update the shared parameters.
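As a rough illustration of the difference (the layer sizes and device names here are assumptions, not values from this book), model parallelism pins different parts of the graph to different workers, while data parallelism builds the same graph on each worker and keeps the shared variables on a parameter server:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 784])

# Model parallelism: split the graph itself across two workers,
# so each device computes a different stage of the model.
with tf.device("/job:worker/task:0"):
    hidden = tf.layers.dense(x, 256, activation=tf.nn.relu)
with tf.device("/job:worker/task:1"):
    logits_model_parallel = tf.layers.dense(hidden, 10)

# Data parallelism: each worker builds an identical replica of the
# model; replica_device_setter places the shared variables on the
# parameter server job automatically.
with tf.device(tf.train.replica_device_setter(
        worker_device="/job:worker/task:0",
        ps_tasks=1)):
    logits_data_parallel = tf.layers.dense(x, 10)
```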