Define and train the graph for asynchronous updates

As discussed previously, in asynchronous updates all of the worker tasks send their parameter updates as soon as they are ready, and the parameter server applies each update and sends back the latest parameters. There is no synchronization, waiting, or aggregation of parameter updates.
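To make the roles concrete, here is a minimal sketch of how the cluster and the per-task server could be set up for such a run. The job names, localhost:port addresses, and task indices are illustrative placeholders and may differ from the values used in ch-15_mnist_dist_async.py:

import tensorflow as tf

# Hypothetical cluster layout: one parameter server and two workers on localhost.
cluster = tf.train.ClusterSpec({
    'ps': ['localhost:9001'],
    'worker': ['localhost:9002', 'localhost:9003']
})

# Each task starts its own server; job_name and task_index identify its role.
# A parameter server task only serves the shared variables and blocks here:
#   server = tf.train.Server(cluster, job_name='ps', task_index=0)
#   server.join()
# A worker task builds and trains the graph (task_index 0 shown as an example):
server = tf.train.Server(cluster, job_name='worker', task_index=0)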

The full code for this example is in ch-15_mnist_dist_async.py. You are encouraged to modify and explore the code with your own datasets.

For asynchronous updates, the graph is created and trained with the following steps:

  1. The graph is defined within the with block:
with tf.device(device_func): ...
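A common way to build device_func is with tf.train.replica_device_setter, which pins the variables to the parameter server task(s) and the remaining operations to the local worker. The following is a rough sketch of what the block might contain; the model, layer sizes, learning rate, and the task_index variable are illustrative and are not taken from ch-15_mnist_dist_async.py:

# task_index is assumed to identify this worker (for example, parsed from a flag).
device_func = tf.train.replica_device_setter(
    worker_device='/job:worker/task:{}'.format(task_index),
    cluster=cluster)

with tf.device(device_func):
    # Illustrative MNIST-style model; names and sizes are placeholders.
    global_step = tf.train.get_or_create_global_step()
    x = tf.placeholder(tf.float32, [None, 784])
    y = tf.placeholder(tf.float32, [None, 10])
    logits = tf.layers.dense(x, 10)
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits))
    # A plain optimizer yields asynchronous updates: each worker applies its
    # gradients to the shared variables as soon as they are computed,
    # without waiting for the other workers.
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
        loss, global_step=global_step)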
