Chapter 10: Optimizing Model Hosting and Inference Costs

The introduction of more powerful computers (notably those with graphics processing units, or GPUs) and powerful machine learning (ML) frameworks such as TensorFlow has produced a generational leap in ML capabilities. As ML practitioners, we are now also responsible for optimizing the use of these capabilities so that we maximize the value we get for the time and money we spend.

In this chapter, you'll learn how to use multiple deployment strategies to meet your training and inference requirements. You'll learn when to precompute and store inferences in advance versus generating them on demand, how to scale inference services to meet fluctuating demand, and how to deploy multiple models for model testing.
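As a taste of the scaling topic, here is a minimal sketch of how a SageMaker endpoint can be wired to Application Auto Scaling so that its instance count tracks traffic. The endpoint name (my-endpoint), variant name (AllTraffic), instance limits, target value, and cooldowns below are illustrative assumptions, not recommendations from this chapter.

import boto3

# Identify the endpoint production variant to scale.
# Both names here are hypothetical placeholders.
ENDPOINT_NAME = "my-endpoint"
VARIANT_NAME = "AllTraffic"
resource_id = f"endpoint/{ENDPOINT_NAME}/variant/{VARIANT_NAME}"

autoscaling = boto3.client("application-autoscaling")

# Register the variant's instance count as a scalable target,
# allowing it to scale between 1 and 4 instances.
autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=4,
)

# Attach a target-tracking policy: add instances when the average
# number of invocations per instance rises above the target value,
# and remove them when traffic subsides.
autoscaling.put_scaling_policy(
    PolicyName="invocations-per-instance",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 100.0,  # example target; tune to your workload
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
        "ScaleInCooldown": 300,  # seconds before scaling in again
        "ScaleOutCooldown": 60,  # seconds before scaling out again
    },
)

With a policy like this in place, capacity follows demand automatically, which is one way to pay for extra instances only while traffic is high.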

