Chapter 12: Model Serving and Monitoring
In this chapter, we will reflect on the need to serve and monitor machine learning (ML) models in production and explore different ways of serving ML models to their users or consumers. Then, we will revisit the Explainable Monitoring framework from Chapter 11, Key Principles for Monitoring Your ML System, and implement it hands-on for the business use case we have been solving using MLOps to predict the weather. We will make inference calls to the deployed API, then monitor and analyze the inference data for drift (such as data drift, feature drift, and model drift) to measure the performance of the ML system. Finally, we will look at several concepts ...
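As a taste of the kind of drift analysis covered later in the chapter, here is a minimal sketch of feature drift detection. It assumes hypothetical reference (training) and production samples for a single weather feature and uses a plain two-sample Kolmogorov-Smirnov test rather than the book's full monitoring pipeline:

```python
# A minimal drift-monitoring sketch. The feature name, sample data, and
# threshold are illustrative assumptions, not the chapter's actual setup.
import numpy as np
from scipy.stats import ks_2samp

def detect_feature_drift(reference: np.ndarray,
                         production: np.ndarray,
                         threshold: float = 0.05) -> bool:
    """Flag drift when the KS test rejects the hypothesis that the
    reference (training) and production samples share a distribution."""
    statistic, p_value = ks_2samp(reference, production)
    return p_value < threshold

# Hypothetical example: compare the 'Temperature' feature seen at training
# time against values collected from recent inference requests.
reference_temps = np.random.normal(loc=15.0, scale=5.0, size=1000)   # training-time sample
production_temps = np.random.normal(loc=18.0, scale=5.0, size=1000)  # recent inference sample

if detect_feature_drift(reference_temps, production_temps):
    print("Feature drift detected for 'Temperature' - investigate before retraining.")
```

In practice, a check like this would run periodically over logged inference data for each input feature, with the results feeding the monitoring and alerting steps discussed in this chapter.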