Chapter 4. Explaining Artificial Intelligence, Machine Learning, and Deep Learning Models

Due to the black box nature of some (though not all) artificial intelligence (AI) and machine learning algorithms, the model logic and the resulting decisions are often hard to explain. This inability to explain the modeling approach, its mechanics, and its limitations affects both technical and nontechnical stakeholders. For example, a model validator will want to confirm that the model is performing as expected, while senior management will want to understand what will happen if the model produces incorrect predictions. In both cases, if the functional form of the model does not lend itself to human intuition, or the model development process is opaque, these questions become difficult to answer.

First, considering the model logic: for some AI and machine learning models, the lack of explainability stems from complex engineered features that may involve sophisticated, nonlinear transformations. In this case, the machine-created model inputs, and the modeled relationships between those inputs, lack human intuition.
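To make this concrete, the following sketch contrasts a hand-crafted feature that a human can explain with a machine-engineered nonlinear feature that may be predictive but carries no obvious business meaning. The data, variable names, and transformation are purely illustrative assumptions, not examples taken from the text.

import numpy as np

rng = np.random.default_rng(0)
income = rng.uniform(20_000, 120_000, size=5)       # hypothetical borrower incomes
debt = rng.uniform(1_000, 60_000, size=5)           # hypothetical outstanding debt
utilisation = rng.uniform(0.0, 1.0, size=5)         # hypothetical credit utilisation

# Human-intuitive feature: debt-to-income ratio, easy to explain to a validator.
dti = debt / income

# Machine-engineered feature: a nonlinear composite of the same inputs that a
# feature-generation routine might produce; it has no natural interpretation.
engineered = np.tanh(np.log1p(debt) * utilisation - np.sqrt(income) / 50.0)

print("debt-to-income:", np.round(dti, 3))
print("engineered    :", np.round(engineered, 3))

A stakeholder can reason about whether a higher debt-to-income ratio should increase risk; the same question is far harder to answer for the composite feature, even though both are derived from identical inputs.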

In other cases, the lack of explainability stems from the model optimization process, which is driven by the model itself rather than by human intelligence: the “learning” is captured within the fitted model rather than in a form a human can readily inspect.
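As a hedged illustration of where that learning resides (the synthetic data, scikit-learn estimators, and parameter counts below are assumptions for illustration, not tools or results prescribed by the text), the sketch compares a logistic regression, whose handful of coefficients a human can read directly, with a gradient boosting ensemble, whose learned structure is spread across thousands of tree nodes.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic binary-classification data standing in for a risk-modeling task.
X, y = make_classification(n_samples=2_000, n_features=10, random_state=0)

# Human-readable model: one coefficient per feature plus an intercept.
logit = LogisticRegression(max_iter=1_000).fit(X, y)

# Model-driven learner: the fitted knowledge lives in the split rules of many trees.
gbm = GradientBoostingClassifier(n_estimators=300, random_state=0).fit(X, y)

print("Logistic regression parameters:", logit.coef_.size + 1)
total_nodes = sum(tree.tree_.node_count for tree in gbm.estimators_.ravel())
print("Boosted-ensemble tree nodes   :", total_nodes)

The boosted model may well be more accurate, but its thousands of internal split rules are not something a human analyst "learned" or can easily narrate to a validator or to senior management.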

In a speech,1 Governor Lael Brainard of the Federal Reserve highlighted the benefits of AI and machine learning, such as greater accuracy and speed ...
