Chapter 8. Model Quality and Continuous Evaluation

So far in this book, we have covered the design and implementation of vision models. In this chapter, we turn to monitoring and evaluation. It is not enough to start with a high-quality model; we also want to maintain that quality once the model is in production. To do so, we need to monitor the deployed model, compute metrics that capture its quality, and continuously evaluate its performance.

Monitoring

So, we’ve trained our model on perhaps millions of images, and we are very happy with its quality. We’ve deployed it to the cloud, and now we can sit back and relax while it makes great predictions forever into the future… Right? Wrong! Just as we wouldn’t leave a small child alone to fend for themselves, we don’t want to leave our models alone out in the wild. It’s important that we continuously monitor both their quality (using metrics like accuracy) and their computational performance (queries per second, latency, and so on). This is especially true when we retrain models on new data, which may contain distribution changes, errors, and other issues that we’ll want to be aware of.
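To make this concrete, here is a minimal sketch of tracking the two kinds of metrics mentioned above, rolling prediction accuracy and per-request latency, for a deployed model. The class and function names (RollingMonitor, timed_predict) are illustrative rather than part of any library, and the sketch assumes a Keras-style model with a predict() method; a production system would typically export these numbers to a monitoring backend rather than keep them in memory.

import time
from collections import deque


class RollingMonitor:
    """Tracks rolling accuracy and latency over the most recent predictions."""

    def __init__(self, window=1000):
        # 1 if the prediction matched the (eventually available) label, else 0
        self.correct = deque(maxlen=window)
        # per-request latency in seconds
        self.latencies = deque(maxlen=window)

    def record(self, predicted_label, true_label, latency_s):
        self.correct.append(int(predicted_label == true_label))
        self.latencies.append(latency_s)

    @property
    def accuracy(self):
        return sum(self.correct) / len(self.correct) if self.correct else float("nan")

    @property
    def mean_latency_ms(self):
        return 1000 * sum(self.latencies) / len(self.latencies) if self.latencies else float("nan")


def timed_predict(model, batch):
    """Runs model.predict on a batch and returns (predictions, elapsed seconds)."""
    start = time.perf_counter()
    preds = model.predict(batch)
    return preds, time.perf_counter() - start

In use, each serving request would call timed_predict, and RollingMonitor.record would be called later, once ground-truth labels arrive, so that drops in rolling accuracy or spikes in mean latency can trigger an alert or a retraining run.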

TensorBoard

ML practitioners often train their models without paying close attention to the details of the training run. They submit a training job, check on it every now and then until it finishes, and then make predictions with the trained model to see how it’s performing. This may not seem like a big ...
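As a concrete example of the kind of visibility TensorBoard provides during training, here is a minimal sketch of attaching the Keras TensorBoard callback to a training job so that loss and metric curves can be inspected as the job runs. The log directory and the model/dataset names (model, train_ds, valid_ds) are placeholders for whatever you use in your own pipeline.

import tensorflow as tf

# Placeholder log directory; point it at wherever you keep your training runs,
# for example a Cloud Storage bucket or a local path such as "./logs/run01".
log_dir = "./logs/run01"

tensorboard_cb = tf.keras.callbacks.TensorBoard(
    log_dir=log_dir,
    histogram_freq=1,      # log weight histograms once per epoch
    update_freq="epoch",   # write loss/metric scalars at the end of each epoch
)

# Assumed model and datasets from earlier chapters:
# model.fit(train_ds, validation_data=valid_ds, epochs=10,
#           callbacks=[tensorboard_cb])

While the job is running (or after it finishes), launching tensorboard --logdir ./logs/run01 brings up the dashboard where the training and validation curves can be compared, which is far more informative than simply waiting for the job to finish.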
