Chapter 9: Model Explainability on Vertex AI

In this chapter, we will cover what explainable AI is and why it is important in machine learning. We will then discuss model explainability techniques on the Vertex AI platform and the use cases to which they apply.

Model Explainability on Vertex AI

For a team developing ML models, the responsibility to explain model predictions grows as the impact of those predictions on business outcomes grows. For example, consumers are likely to accept a movie recommendation from an ML model without needing an explanation. The consumer may or may not agree with the recommendation, but the burden on the model developers to justify the prediction is relatively low.

In contrast, if an ML model predicts whether a credit loan application should be approved or whether a patient's drug dosage is correct, the model developers are responsible for explaining the prediction. They need to address questions such as ...
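To make the loan-approval scenario concrete, the following is a minimal sketch of how feature attributions could be requested from a deployed Vertex AI endpoint using the Python SDK. The project, endpoint ID, and feature names are hypothetical, and the sketch assumes the model was deployed with an explanation specification configured.

    # Minimal sketch: requesting feature attributions from a Vertex AI endpoint.
    # Assumes a tabular model already deployed with an explanation spec; the
    # project, region, endpoint ID, and features below are placeholders.
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")

    # Hypothetical endpoint serving a loan-approval model
    endpoint = aiplatform.Endpoint(
        "projects/my-project/locations/us-central1/endpoints/1234567890"
    )

    instance = {"income": 54000, "loan_amount": 12000, "credit_history_years": 7}

    # explain() returns both the prediction and per-feature attributions
    response = endpoint.explain(instances=[instance])

    for explanation in response.explanations:
        for attribution in explanation.attributions:
            print("Baseline output:", attribution.baseline_output_value)
            print("Instance output:", attribution.instance_output_value)
            print("Feature attributions:", attribution.feature_attributions)

The per-feature attribution values returned here are what would let a developer answer which inputs pushed the model toward approving or rejecting a given application.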
