8 Scaling up AutoML

This chapter covers

  • Loading large datasets into memory batch by batch
  • Using multiple GPUs to speed up search and training
  • Using Hyperband to schedule model training efficiently and make the best use of the available computing resources
  • Using pretrained models and warm-start to accelerate the search process

This chapter introduces techniques for large-scale training, such as using large datasets to train large models on multiple GPUs. For datasets that are too big to fit into memory all at once, we'll show you how to load them batch by batch during training. We'll also introduce different parallelization strategies to distribute the training and search processes across multiple GPUs. In addition, we'll show you how to use Hyperband to schedule model training efficiently and how to use pretrained models and warm-start to accelerate the search process.
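
As a preview of the first two ideas, here is a minimal sketch (not one of the book's own listings) of streaming a dataset that does not fit into memory with the tf.data API and replicating training across the available GPUs with tf.distribute.MirroredStrategy. The TFRecord shard path, the parse_example() helper, and the small model are hypothetical placeholders used only for illustration; the chapter develops these techniques in full.

import tensorflow as tf

def parse_example(serialized):
    # Hypothetical parser: decode one serialized record into (image, label).
    feature_spec = {
        "image": tf.io.FixedLenFeature([], tf.string),
        "label": tf.io.FixedLenFeature([], tf.int64),
    }
    example = tf.io.parse_single_example(serialized, feature_spec)
    image = tf.io.decode_jpeg(example["image"], channels=3)
    image = tf.image.resize(image, (224, 224)) / 255.0
    return image, example["label"]

# Stream records from disk batch by batch instead of loading everything at once.
dataset = (
    tf.data.TFRecordDataset(["train-00001.tfrecord"])  # hypothetical shard path
    .map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
    .shuffle(10_000)
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)
)

# MirroredStrategy replicates the model on every visible GPU and splits each
# batch across them, so a single fit() call uses all available devices.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu",
                               input_shape=(224, 224, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

model.fit(dataset, epochs=3)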
