Chapter 14. Observability and Monitoring AI Systems
If you are lucky enough that your AI system is small and has few moving parts, one person might be able to understand it well enough to quickly detect, diagnose, and fix any problems. However, all successful software systems grow in complexity (feature creep!), and eventually you need systematic support to detect and diagnose operational problems. In short, you will need observability and monitoring for your AI system.
Observability has two pillars upon which everything is built: metrics and logging. Metrics are numerical measurements of the performance of infrastructural services and ML pipelines; common examples are model performance, data quality, latency, throughput, KPIs, and costs. Logs are structured and unstructured text outputs (including traces) from infrastructural services and ML pipelines that provide insight into their internal state, errors, and fine-grained performance. Metrics are the building blocks for SLOs and for elastic AI systems that automatically scale the resources they use up and down. Logs are fundamental to everything from error detection and debugging, to error analysis for LLMs, to model and feature monitoring.
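To make the distinction between the two pillars concrete, the following is a minimal sketch of a prediction service that emits both: counters and histograms as metrics, and JSON-formatted records as logs. It assumes a Python service instrumented with the prometheus_client library and the standard logging module; the metric names, labels, and model name are illustrative, not taken from this book.

```python
# Sketch: emitting metrics (numerical, aggregated) and logs (per-event,
# structured) from a prediction service. Metric and model names are illustrative.
import json
import logging
import time

from prometheus_client import Counter, Histogram, start_http_server

# Metrics: numerical measurements scraped and aggregated by a monitoring system.
PREDICTION_REQUESTS = Counter(
    "prediction_requests_total", "Total prediction requests", ["model", "status"]
)
PREDICTION_LATENCY = Histogram(
    "prediction_latency_seconds", "Prediction latency in seconds", ["model"]
)

# Logs: structured (here, JSON) records describing individual events.
logger = logging.getLogger("prediction_service")
logging.basicConfig(level=logging.INFO, format="%(message)s")


def predict(model_name: str, features: dict) -> float:
    start = time.perf_counter()
    try:
        score = 0.42  # placeholder for a real model call
        PREDICTION_REQUESTS.labels(model=model_name, status="ok").inc()
        logger.info(json.dumps({
            "event": "prediction",
            "model": model_name,
            "features": features,
            "score": score,
        }))
        return score
    except Exception as exc:
        PREDICTION_REQUESTS.labels(model=model_name, status="error").inc()
        logger.error(json.dumps({
            "event": "prediction_error",
            "model": model_name,
            "error": str(exc),
        }))
        raise
    finally:
        PREDICTION_LATENCY.labels(model=model_name).observe(time.perf_counter() - start)


if __name__ == "__main__":
    start_http_server(8000)  # expose /metrics for a Prometheus-style scraper
    predict("fraud_model_v1", {"amount": 12.5, "country": "SE"})
```

Note how the metrics are cheap, pre-aggregated signals suitable for dashboards, SLOs, and autoscaling, while the log records capture per-request detail (including the feature values) needed for debugging, error analysis, and monitoring.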
This chapter covers observability and monitoring for all three classes of AI systems in this book. We first look at logging and metrics for batch ML systems and real-time ML systems. We will see that we need to separately log transformed and untransformed feature values for feature and model monitoring, ...