Chapter 14. Model Serving Examples

This chapter provides three examples that take a hands-on approach to serving ML models effectively and efficiently. In the first example, we’ll take a deep dive into the deployment of TensorFlow and JAX models. In the second example, we’ll address how you can optimize your deployment setup with TensorFlow Profiler.

For our third example, we will introduce TorchServe, the deployment framework for PyTorch models.

Example: Deploying TensorFlow Models with TensorFlow Serving

Using machine learning framework–specific deployment libraries rather than generic Python API implementations provides a number of performance benefits. In this example, we’ll focus on TensorFlow Serving (TF Serving), which allows you to deploy TensorFlow, Keras, JAX, and scikit-learn models effectively. If you’re interested in how to deploy PyTorch models, hop over to this chapter’s third example, where we’ll be focusing on TorchServe, the PyTorch-specific deployment library.

Let’s assume you have trained, evaluated, and exported a TensorFlow/Keras model. In this section, we’ll introduce how you can set up a TF Serving instance with Docker, show how to configure TF Serving, and then demonstrate how you can request predictions from the model server.
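As a quick preview of the last of those steps, here is a minimal sketch of requesting a prediction over TF Serving’s REST API. The model name my_model, the feature values, and the locally running server are illustrative assumptions; only the URL pattern and the default REST port 8501 come from TF Serving itself:

import json

import requests

# Hypothetical model name; 8501 is TF Serving's default REST port.
url = "http://localhost:8501/v1/models/my_model:predict"

# A single illustrative input instance with four feature values.
payload = {"instances": [[5.1, 3.5, 1.4, 0.2]]}

response = requests.post(url, data=json.dumps(payload))
response.raise_for_status()

# TF Serving returns the model outputs under the "predictions" key.
print(response.json()["predictions"])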

Exporting Keras Models for TF Serving

Before deploying your ML model, you need to export it. TF Serving supports the TensorFlow SavedModel format, which serializes the model into a protocol buffer format. The following example shows how to export a Keras model in this format.
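A minimal sketch of such an export, assuming a small illustrative Keras classifier and the hypothetical export path serving/my_model/1, could look like this:

import tensorflow as tf

# A small illustrative Keras classifier standing in for your trained model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# TF Serving expects each model version in its own numbered subdirectory:
# <model_base_path>/<model_name>/<version>/
export_path = "serving/my_model/1"
tf.saved_model.save(model, export_path)

The version number at the end of the export path matters: TF Serving discovers and loads model versions by scanning the numbered subdirectories below the model’s base path.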
