Chapter 8. Train and Optimize Models at Scale

Peter Drucker, one of Jeff Bezos’s favorite business strategists, once said, “If you can’t measure it, you can’t improve it.” This quote captures the essence of this chapter, which focuses on measuring, optimizing, and improving our predictive models.

In the previous chapter, we trained a single model with a single set of hyper-parameters using Amazon SageMaker. We also demonstrated how to fine-tune a pre-trained BERT model to build a review-text classifier that predicts the sentiment of product reviews in the wild from social channels, partner websites, etc.

In this chapter, we will use SageMaker Experiments to measure, track, compare, and improve our models at scale, and SageMaker Hyper-Parameter Tuning to choose the best hyper-parameters for our specific algorithm and dataset. We will also show how to perform distributed training using various communication strategies and distributed file systems. We finish with tips on reducing cost and increasing performance using SageMaker Autopilot's hyper-parameter-selection algorithm, SageMaker's optimized pipe to S3, and AWS's enhanced-networking hardware.
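As a preview of the experiment-tracking pattern, the following minimal sketch attaches a training job to a SageMaker Experiments experiment and trial so that its parameters and metrics can be compared in SageMaker Studio. It assumes a TensorFlow training script named train.py and the smexperiments helper library; the experiment name, trial name, hyper-parameters, and S3 paths are placeholders rather than this chapter's exact settings.

import time
from sagemaker.tensorflow import TensorFlow
from smexperiments.experiment import Experiment
from smexperiments.trial import Trial

# Create an experiment to group related training runs (name is a placeholder)
experiment = Experiment.create(
    experiment_name=f"review-classifier-{int(time.time())}",
    description="Track and compare BERT fine-tuning runs",
)

# Each training run gets its own trial within the experiment
trial = Trial.create(
    trial_name=f"trial-{int(time.time())}",
    experiment_name=experiment.experiment_name,
)

# Hypothetical estimator; entry point, role, instance type, and
# hyper-parameters are assumptions for illustration only
estimator = TensorFlow(
    entry_point="train.py",
    role="<YOUR_SAGEMAKER_EXECUTION_ROLE>",
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    framework_version="2.3.1",
    py_version="py37",
    hyperparameters={"epochs": 3, "learning_rate": 3e-5},
)

# Associate the training job with the experiment and trial so its
# parameters and metrics appear in SageMaker Studio's experiments view
estimator.fit(
    inputs={"train": "s3://<YOUR_BUCKET>/train"},
    experiment_config={
        "ExperimentName": experiment.experiment_name,
        "TrialName": trial.trial_name,
        "TrialComponentDisplayName": "train",
    },
)

Every run logged this way becomes a trial component that can be sorted and filtered by its recorded hyper-parameters and metrics, which is what makes the comparisons in the rest of this chapter possible.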

Automatically Find the Best Model Hyper-Parameters

Now that we understand how to track and compare model-training runs, we can automatically find the best hyper-parameters for our dataset and algorithm using a scalable process called hyper-parameter tuning (HPT) or hyper-parameter optimization (HPO). SageMaker natively supports ...
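As a concrete illustration of HPT with the SageMaker Python SDK, the sketch below wraps the estimator from the previous sketch in a HyperparameterTuner that searches over learning rate, batch size, and number of epochs using Bayesian optimization. The specific ranges, objective metric name, and log regex are illustrative assumptions, not settings prescribed by this chapter.

from sagemaker.tuner import (
    HyperparameterTuner,
    ContinuousParameter,
    CategoricalParameter,
    IntegerParameter,
)

# Ranges to search; these bounds and choices are illustrative assumptions
hyperparameter_ranges = {
    "learning_rate": ContinuousParameter(1e-5, 1e-4, scaling_type="Logarithmic"),
    "train_batch_size": CategoricalParameter([64, 128, 256]),
    "epochs": IntegerParameter(1, 3),
}

# The tuner launches multiple training jobs from the estimator and uses
# Bayesian optimization to choose the next hyper-parameters to try
tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:accuracy",
    objective_type="Maximize",
    # Regex that extracts the metric from the training logs (assumed log format)
    metric_definitions=[
        {"Name": "validation:accuracy", "Regex": "val_accuracy: ([0-9\\.]+)"}
    ],
    hyperparameter_ranges=hyperparameter_ranges,
    strategy="Bayesian",
    max_jobs=8,
    max_parallel_jobs=2,
)

tuner.fit(
    inputs={
        "train": "s3://<YOUR_BUCKET>/train",
        "validation": "s3://<YOUR_BUCKET>/validation",
    }
)

# Name of the training job that achieved the best objective metric
print(tuner.best_training_job())

Once the tuning job completes, best_training_job() returns the name of the run that achieved the best objective metric, and its model artifacts can be deployed like those of any other SageMaker training job.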
