7 Explainability and Interpretability

Translucent does not even mean invisible – it means semi-transparent.

Billy Butcher (The Boys, 2019)

Synopsis

So far, we have learned about a variety of ML types and algorithms. We have also learned how to build ML models, examine such models, and deploy them. Collectively, we learned all about developing data-driven ML models. What we do not know yet is how to explain (or interpret) model predictions.1 For example, given a set of features, if a model predicts a certain outcome (i.e., that a phenomenon will occur2), then the following question may arise: why does the model predict that the combination of these features will lead to this particular outcome?

This chapter hopes to shed some light on answers that can help us address the aforementioned question. In this pursuit, it will introduce principles and associated methods for establishing explainability and interpretability3 through the lens of ML.
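To preview what such an answer might look like in practice, the short sketch below attributes a single prediction to its input features. It relies on synthetic, hypothetical data together with scikit-learn and the shap package; neither the data nor these particular tools are prescribed by the text.

# A minimal sketch of a "local" explanation for one prediction.
# Assumptions: synthetic stand-in data; scikit-learn and shap are installed.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feature_names = ["cement", "water", "aggregate", "age"]  # hypothetical features
X = rng.uniform(0.0, 1.0, size=(500, 4))
# Synthetic response loosely mimicking a strength-like outcome
y = 40 * X[:, 0] - 25 * X[:, 1] + 10 * X[:, 2] + 15 * X[:, 3] + rng.normal(0.0, 2.0, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# How much does each feature push this one prediction away from the
# model's average output (the "base value")?
explainer = shap.TreeExplainer(model)
x_row = X[:1]                                     # the observation to explain
contributions = explainer.shap_values(x_row)[0]   # one contribution per feature
base_value = float(np.ravel(explainer.expected_value)[0])

print(f"prediction : {model.predict(x_row)[0]:.2f}")
print(f"base value : {base_value:.2f}")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>10} : {c:+.2f}")

For SHAP values, the per-feature contributions plus the base value add back up to the prediction itself, which is precisely the kind of answer to "why" that this chapter formalizes.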

7.1 The Need for Explainability

As we know, an ML model maps a group of features to an outcome (e.g., tying concrete mix ingredients to the compressive strength property4). In our process, a properly validated model is one that attains a certain level of goodness and is thereby declared proper and deemed to have an acceptable prediction capability.
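To make this concrete, the sketch below (again on synthetic, hypothetical data that is not taken from the text) fits such a model, reports the usual validation metrics that certify its goodness, and then asks a first "why"-flavored question of the same model using scikit-learn's permutation importance.

# A minimal sketch contrasting a model's "goodness" (validation metrics)
# with a first, global look at "why" (permutation importance).
# Assumptions: synthetic stand-in data; scikit-learn is installed.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
feature_names = ["cement", "water", "coarse_agg", "fine_agg", "age"]  # hypothetical
X = rng.uniform(0.0, 1.0, size=(800, 5))
y = (35 * X[:, 0] - 20 * X[:, 1] + 8 * X[:, 2] + 5 * X[:, 3] + 12 * X[:, 4]
     + rng.normal(0.0, 2.0, 800))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = RandomForestRegressor(n_estimators=300, random_state=1).fit(X_train, y_train)

# "Goodness": performance metrics on held-out data ...
y_pred = model.predict(X_test)
print(f"R^2  : {r2_score(y_test, y_pred):.3f}")
print(f"RMSE : {np.sqrt(mean_squared_error(y_test, y_pred)):.3f}")

# ... versus "why": which features, when shuffled, degrade the validated
# model's score the most?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=1)
ranked = sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1])
for name, imp in ranked:
    print(f"{name:>10} : {imp:.3f}")

The metrics certify that the model performs well; the importance ranking starts to say something about why it behaves as it does, which is the distinction this section motivates.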

Let us think about this for a minute. Surely, satisfying selected performance metrics and error indicators is a quantifiable measure that we can attach to the goodness of a model. This is the same ...
