November 2019
Intermediate to advanced
304 pages
8h 40m
English
Distributed neural network training involves several challenges. These include managing different hardware dependencies across master and worker nodes, configuring distributed training to produce good performance, benchmarking memory usage across the distributed cluster, and more. We discussed some of these concerns in the previous recipes. With those configurations in place, we'll move on to the actual distributed training/evaluation. In this recipe, we will perform the following tasks: