Chapter 23. Saving, Loading, and Serving Trained Models
23.0 Introduction
In the last 22 chapters and around 200 recipes, we have covered how to take raw data and use machine learning to create well-performing predictive models. However, for all our work to be worthwhile, we eventually need to do something with our model, such as integrate it with an existing software application. To accomplish this goal, we need to be able to save our models after training, load them when they are needed by an application, and then make requests to that application to get predictions.
ML models are typically deployed behind simple web servers designed to take input data and return predictions. This makes the model available to any client on the same network, so other services (such as UIs) and users can request predictions in real time. An example use case is item search on an ecommerce website: a served ML model takes in data about users and listings and returns the likelihood that a user will purchase a given listing. The predictions must be returned in real time to the ecommerce application responsible for taking user searches and coordinating results for the user.
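To make the idea concrete, the following is a minimal sketch of such a prediction server using Flask; the endpoint name, input format, and model filename (model.pkl) are illustrative assumptions, not a prescribed setup:

# Load libraries
import joblib
from flask import Flask, request, jsonify

app = Flask(__name__)

# Load the trained model once at startup (assumes it was saved as model.pkl)
model = joblib.load("model.pkl")

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body such as {"features": [[5.1, 3.5, 1.4, 0.2]]}
    features = request.get_json()["features"]
    predictions = model.predict(features).tolist()
    return jsonify({"predictions": predictions})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

A client (a UI, another service, or a search backend) would then POST feature data to /predict and receive predictions in the response, which is the pattern the recipes in this chapter build toward.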
23.1 Saving and Loading a scikit-learn Model
Problem
You have a trained scikit-learn model and want to save it and load it elsewhere.
Solution
Save the model as a pickle file:
# Load libraries
import joblib
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier

# Load example data and train a model
features, target = datasets.load_iris(return_X_y=True)
model = RandomForestClassifier().fit(features, target)

# Save the trained model as a pickle file
joblib.dump(model, "model.pkl")
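To use the model elsewhere, load it back with joblib.load. This is a minimal sketch assuming the filename from the save example above and an illustrative new observation:

# Load libraries
import joblib

# Load the saved model in the destination application
classifier = joblib.load("model.pkl")

# Predict the class of a new observation (feature values are illustrative)
new_observation = [[5.2, 3.2, 1.1, 0.1]]
classifier.predict(new_observation)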