Chapter 15. Model Management and Delivery

In this chapter, we’ll discuss model management and delivery. We’ll start with experiment tracking, then introduce MLOps and cover some of its core concepts and the levels of maturity for implementing MLOps processes and infrastructure. We’ll also discuss workflows in some depth, along with model versioning, before diving into both continuous delivery and progressive delivery.

Experiment Tracking

Experiments are fundamental to data science and ML. ML in practice is more of an experimental science than a theoretical one, so tracking the results of experiments, especially in production environments, is critical to making progress toward your goals. We need rigorous processes and reproducible results, which has created the need for experiment tracking.

Debugging in ML is often fundamentally different from debugging in software engineering, because it’s often about a model not converging or not generalizing instead of some functional error such as a segmentation fault or stack overflow. Keeping a clear record of the changes to the model and data over time can be a big help when you’re trying to hunt down the source of the problem.

Even small changes, such as changing the width of a layer or the learning rate, can make a big difference in both the model’s performance and the resources required to train the model. So again, tracking even small changes is important.
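The chapter doesn’t prescribe a particular tool at this point, but as one illustration, here is a minimal sketch of logging a run with the open source MLflow tracking API. The choice of MLflow, the run name, the parameter names, and the placeholder accuracy value are all assumptions for the sake of the example, not something specified in the text.

import mlflow

# Hyperparameters whose small changes we want to be able to trace later,
# such as the width of a hidden layer or the learning rate.
params = {"learning_rate": 1e-3, "hidden_width": 128}

with mlflow.start_run(run_name="wider-hidden-layer"):  # hypothetical run name
    mlflow.log_params(params)  # record the exact configuration for this run

    # Stand-in for a real training and evaluation loop; replace with your own
    # code that trains the model and computes a validation metric.
    val_accuracy = 0.91

    # Record the outcome alongside the configuration that produced it.
    mlflow.log_metric("val_accuracy", val_accuracy)

With each run logged this way, you can later compare configurations side by side and pinpoint which small change shifted the model’s performance or its training cost.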

And don’t forget that running experiments, ...
