This simple example will illustrate how the pieces of a distributed TensorFlow setup work together.
In this sample, we will solve a very simple task, one that nevertheless walks through all the steps needed in a machine learning process.
Distributed training cluster setup
The parameter server (ps) will hold the parameters of the linear function to solve (in this case just
b0), and the two worker servers will train the variable, each constantly updating and improving upon the value left by the other, working in a collaborative fashion.
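To make the collaboration pattern concrete before looking at the TensorFlow code, here is a minimal single-process sketch of the parameter-server idea in plain Python. The ps holds the shared variable b0, and two workers alternately pull it, compute a gradient on their own data shard, and push the updated value back. The data values and learning rate are illustrative assumptions, not taken from the sample; real distributed TensorFlow runs each role as a separate process communicating over the network.

```python
# Illustrative sketch of the parameter-server pattern (not the
# actual distributed TensorFlow sample, which follows below).

ps = {"b0": 0.0}  # the parameter server holds the shared variable


def worker_step(data_y, lr=0.1):
    """One training step for the trivial model y = b0:
    pull b0 from the ps, compute the gradient of the mean
    squared error, and push the updated value back."""
    b0 = ps["b0"]                                  # pull current value
    grad = sum(b0 - y for y in data_y) / len(data_y)
    ps["b0"] = b0 - lr * grad                      # push updated value


# Two workers alternate steps on their own data shards,
# each improving on the value left by the other.
shard1 = [2.0, 2.2, 1.8]   # hypothetical data for worker 0
shard2 = [2.1, 1.9, 2.0]   # hypothetical data for worker 1
for _ in range(100):
    worker_step(shard1)
    worker_step(shard2)

print(round(ps["b0"], 2))  # converges near the data mean, 2.0
```

Each worker only ever reads and writes the shared state on the ps, which is exactly the division of labor the distributed sample sets up with a ps job and two worker jobs.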
The sample code is as follows:
import tensorflow as tf