Chapter 13. Hosting PyTorch Models for Serving
In the earlier chapters of this book, we looked at many scenarios for training ML models, including those for computer vision, NLP, and sequence modeling. But training is just the first step: a model is of little use unless other people have a way to tap into its power! In this chapter, we'll take a brief tour of some of the more popular tools for giving them exactly that.
You should note that taking a trained PyTorch model to a production-ready service involves a lot more than just deploying it, and the machine learning operations (MLOps) discipline was designed with exactly that in mind. Once you enter the world of serving these models, you'll face new challenges, such as handling real-time requests, managing computational resources, ensuring reliability, and maintaining performance under varying loads.
Ultimately, MLOps is about bridging the gap between data science and software engineering. That's beyond the scope of this chapter, but there are some great O'Reilly books on the subject, including Implementing MLOps in the Enterprise by Yaron Haviv and Noah Gift and LLMOps by Abi Aryan.
This chapter will introduce two popular approaches to serving PyTorch models in production environments.
We’ll begin with TorchServe, the official serving solution from PyTorch, which provides a robust framework designed specifically for serving deep-learning models at scale. TorchServe offers out-of-the-box features such as multimodel serving, model versioning, server-side batching, logging, and metrics for monitoring.
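To give you a feel for the workflow before we dive in: with TorchServe you typically package your trained weights and a handler into a .mar archive using the torch-model-archiver tool, launch the server with torchserve --start, and then send inference requests over HTTP. As a minimal sketch, here's what a client request might look like in Python, assuming a TorchServe instance is already running locally with its default inference port; the model name densenet161 and the sample image file are hypothetical placeholders:

import requests

# Assumes TorchServe is running locally with a model registered under
# the (hypothetical) name "densenet161"; the default inference API
# listens on port 8080.
with open("kitten.jpg", "rb") as f:  # hypothetical sample input
    response = requests.post(
        "http://127.0.0.1:8080/predictions/densenet161",
        data=f,  # send the raw image bytes as the request body
    )
print(response.json())  # the built-in image classifier handler returns JSON

Note that this hits the inference API only; TorchServe also exposes a separate management API (on port 8081 by default) for registering, scaling, and unregistering models at runtime.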