Using Multiple Executors

It should be apparent to the reader that many features of TensorFlow and its computational graphs lend themselves naturally to being computed in parallel. The computational graph can be split across different processors, and different batches can be processed in parallel as well. In this recipe, we will address how to access different processors on the same machine.
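
Before distributing work, it helps to know which devices TensorFlow can actually see. The snippet below is a minimal sketch, assuming a TensorFlow 1.x installation (the version the recipes in this book target), that lists the local devices:

# List the devices TensorFlow can see on this machine (CPUs and any GPUs).
# Assumes a TensorFlow 1.x installation.
from tensorflow.python.client import device_lib

local_devices = device_lib.list_local_devices()
for device in local_devices:
    # Each entry reports a device name (for example, '/device:CPU:0') and its type.
    print(device.name, device.device_type)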

Getting ready

For this recipe, we will show how to access multiple devices on the same system and train on them. This is a very common occurrence: along with a CPU, a machine may have one or more GPUs that can share the computational load. If TensorFlow can access these devices, it will automatically distribute the computations across them via a greedy process. ...
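
To see how placement works in practice, here is a small sketch, assuming a TensorFlow 1.x session-based workflow, that pins operations to specific devices with tf.device. Enabling log_device_placement makes TensorFlow report which device each operation runs on, and allow_soft_placement lets the graph fall back to the CPU on machines without a GPU:

import tensorflow as tf

# Log where each op runs, and allow a CPU fallback if no GPU is present.
config = tf.ConfigProto(allow_soft_placement=True,
                        log_device_placement=True)

# Pin the constant inputs to the CPU.
with tf.device('/cpu:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]], name='a')
    b = tf.constant([[2.0, 0.0], [0.0, 2.0]], name='b')

# Request the matrix multiply on the first GPU; soft placement covers
# machines that only have a CPU.
with tf.device('/gpu:0'):
    c = tf.matmul(a, b, name='c')

with tf.Session(config=config) as sess:
    print(sess.run(c))

With log_device_placement turned on, the session prints a line for every operation showing the device it was assigned to, which makes it easy to verify that the automatic placement matches your expectations.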
