Chapter 11. Monitor and Update Models

Once a model is deployed, its performance should be monitored just like that of any other software system. As we saw in “Test Your ML Code”, regular software best practices apply, and just as with testing, there are additional concerns to keep in mind when dealing with ML models.

In this chapter, we will describe key aspects to keep in mind when monitoring ML models. More specifically, we will answer three questions:

  1. Why should we monitor our models?

  2. How do we monitor our models?

  3. What actions should our monitoring drive?

Let’s start by covering how monitoring models can help us decide when to deploy a new version and surface problems in production.

Monitoring Saves Lives

The goal of monitoring is to track the health of a system. For models, this means monitoring their performance and the quality of their predictions.

If a change in user habits suddenly causes a model to produce subpar results, a good monitoring system will allow you to notice and react as soon as possible. Let’s cover some key issues that monitoring can help us catch.

Monitoring to Inform Refresh Rate

We saw in “Freshness and Distribution Shift” that most models need to be regularly updated to maintain a given level of performance. Monitoring can be used to detect when a model is not fresh anymore and needs to be retrained.
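As a concrete illustration of this idea, here is a minimal sketch of a staleness check: it keeps a rolling window of recent evaluation scores and flags the model for retraining when their average drops a set tolerance below the score measured at deploy time. The function name, window size, and threshold values are all illustrative assumptions, not part of the book's code; in practice these would be tuned per application.

```python
from collections import deque

def should_retrain(recent_scores, baseline_score, tolerance=0.05):
    """Illustrative heuristic: flag retraining when the average of recent
    evaluation scores drops more than `tolerance` below the baseline
    score measured when the model was deployed."""
    if not recent_scores:
        return False
    average = sum(recent_scores) / len(recent_scores)
    return average < baseline_score - tolerance

# Keep a rolling window of the last seven daily accuracy estimates.
window = deque(maxlen=7)
for daily_accuracy in [0.88, 0.86, 0.84, 0.82, 0.80, 0.79, 0.78]:
    window.append(daily_accuracy)

print(should_retrain(window, baseline_score=0.90))  # True: the model has gone stale
```

The key design choice is comparing against a baseline captured at deployment rather than an absolute number, so the same check works for models with different starting performance.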

For example, let’s say that we use the implicit feedback that we get from our users (whether they click on recommendations, for example) to estimate ...
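One simple way to turn such implicit feedback into a monitorable metric is a click-through rate over recent recommendations. The sketch below assumes feedback arrives as `(recommendation_id, was_clicked)` pairs; the function and event format are illustrative assumptions, not the book's implementation.

```python
def click_through_rate(events):
    """Estimate click-through rate from implicit feedback events.
    Each event is a (recommendation_id, was_clicked) pair; this event
    format is an illustrative assumption."""
    if not events:
        return 0.0
    return sum(clicked for _, clicked in events) / len(events)

# Recent feedback: two of four recommendations were clicked.
recent = [("rec_1", True), ("rec_2", False), ("rec_3", False), ("rec_4", True)]
print(click_through_rate(recent))  # 0.5
```

A sustained drop in this rate relative to its historical level is one signal that the model's recommendations are getting stale.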
