Chapter 2: Parameter Server and All-Reduce

As described in Chapter 1, Splitting Input Data, to keep the model consistent across all the GPUs/nodes involved in a data parallel training job, we need to conduct model synchronization. Distributed system architectures for data parallel training are built around this model synchronization core.

To guarantee model consistency, two methodologies can be applied.

First, we can keep the model parameters in one place (a centralized node). Whenever a GPU/node needs to conduct model training, it pulls the parameters from the centralized node, trains the model, and then pushes the model updates back to the centralized node. Model consistency is guaranteed since all the GPUs/nodes pull their parameters from the same centralized node.
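To make this pull/train/push cycle concrete, the following is a minimal, single-process sketch of the centralized approach. The ParameterServer class, the worker_step function, and the linear model are illustrative assumptions for this chapter's discussion, not the API of any particular framework; real systems run the workers on separate GPUs/nodes and exchange parameters over the network.

import numpy as np

class ParameterServer:
    """Hypothetical centralized node that owns the model parameters."""
    def __init__(self, num_params):
        self.params = np.zeros(num_params)

    def pull(self):
        # Every worker reads the latest parameters from this single source.
        return self.params.copy()

    def push(self, gradient, lr=0.1):
        # Workers send their updates back; the server applies them in one
        # place, so every subsequent pull sees a consistent model.
        self.params -= lr * gradient

def worker_step(server, local_data, local_labels):
    """One training iteration on a single GPU/node (illustrative linear model)."""
    params = server.pull()                # 1. pull the current model
    preds = local_data @ params           # 2. forward pass on the local shard
    grad = local_data.T @ (preds - local_labels) / len(local_labels)  # 3. local gradient
    server.push(grad)                     # 4. push the update back

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    server = ParameterServer(num_params=4)
    # Each "node" holds its own shard of the training data (data parallelism).
    shards = [(rng.normal(size=(32, 4)), rng.normal(size=32)) for _ in range(3)]
    for _ in range(10):
        for data, labels in shards:
            worker_step(server, data, labels)
    print(server.params)

Because every worker pulls from and pushes to the same node, no two workers can hold permanently diverging copies of the model; the trade-off, discussed later in this chapter, is that the centralized node can become a communication bottleneck.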
