Define and train the graph for asynchronous updates

As discussed previously, and shown in the following diagram, in asynchronous updates each worker task sends its parameter updates as soon as they are ready; the parameter server applies each update as it arrives and sends the latest parameters back to that worker. There is no synchronization, waiting, or aggregation of parameter updates:
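To make the contrast concrete, the following is a minimal sketch, not taken from the book's code, of how a worker in asynchronous training applies its gradients independently. The placeholder model, the loss, and the learning rate are illustrative assumptions; in ch-15_mnist_dist_async.py the loss comes from the MNIST network.

    import tensorflow as tf

    # Illustrative placeholder model standing in for the MNIST network.
    x = tf.placeholder(tf.float32, shape=[None, 784])
    w = tf.Variable(tf.zeros([784, 10]))
    loss = tf.reduce_mean(tf.square(tf.matmul(x, w)))

    global_step = tf.train.get_or_create_global_step()
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)

    # Asynchronous updates: each worker applies its gradients to the shared
    # variables on the parameter server as soon as they are computed, without
    # waiting for the other workers.
    train_op = optimizer.minimize(loss, global_step=global_step)

    # For synchronous updates you would instead aggregate gradients across
    # workers before applying them, for example:
    #   optimizer = tf.train.SyncReplicasOptimizer(optimizer,
    #                                              replicas_to_aggregate=num_workers)
    #   train_op = optimizer.minimize(loss, global_step=global_step)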

The full code for this example is in ch-15_mnist_dist_async.py. You are encouraged to modify and explore the code with your own datasets.

For asynchronous updates, the graph is created and trained with the following steps:

  1. The graph is defined within the with block:
with tf.device(device_func): ...
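The excerpt does not show how device_func is constructed. A common way to define it in TensorFlow 1.x, and a reasonable assumption for this example, is tf.train.replica_device_setter, which places the variables on the parameter server tasks and keeps the remaining operations on the local worker. The cluster addresses and task_index below are illustrative.

    import tensorflow as tf

    # Assumed cluster layout for illustration: one parameter server, two workers.
    cluster = tf.train.ClusterSpec({
        'ps': ['localhost:9001'],
        'worker': ['localhost:9002', 'localhost:9003']
    })

    task_index = 0  # index of this worker task, normally supplied via flags

    # Pin variables to the 'ps' job; other ops stay on this worker task.
    device_func = tf.train.replica_device_setter(
        worker_device='/job:worker/task:{}'.format(task_index),
        cluster=cluster)

    with tf.device(device_func):
        # ... define the model, loss, and training op here ...
        pass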
