6

Efficient Model Training

Similar to how we scaled up data processing pipelines in the previous chapter, we can reduce the time it takes to train deep learning (DL) models by allocating more computational resources. In this chapter, we will learn how to configure TensorFlow (TF) and PyTorch training logic to utilize multiple CPU and GPU devices across different machines. First, we will learn how TF and PyTorch support distributed training without any external tools. Next, we will describe how to utilize SageMaker, since it is built to handle the entire DL pipeline in the cloud from end to end. Lastly, we will look at tools that have been developed specifically for distributed training: Horovod, Ray, and Kubeflow.
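
To make the idea concrete before diving in, here is a minimal sketch of single-machine, multi-GPU training using TensorFlow's built-in tf.distribute.MirroredStrategy. The toy model and MNIST data are illustrative stand-ins chosen for this sketch, not the pipeline used later in the chapter.

```python
import tensorflow as tf

# MirroredStrategy replicates the model on every visible GPU on this machine
# (falling back to a single CPU replica if no GPU is available).
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Variables created inside the scope are mirrored across all devices.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

# Gradients are averaged across replicas automatically during fit();
# MNIST is just a placeholder dataset for this example.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
model.fit(x_train, y_train, batch_size=256, epochs=2)
```

PyTorch offers an analogous path through torch.nn.parallel.DistributedDataParallel, and both frameworks' options are covered in the sections that follow.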

In this chapter, we’re going to cover ...
