4 Model-agnostic methods: Local interpretability
This chapter covers
- Characteristics of deep neural networks
- How to implement deep neural networks, which are inherently black-box models
- Perturbation-based model-agnostic methods that are local in scope, such as LIME, SHAP, and anchors
- How to interpret deep neural networks using LIME, SHAP, and anchors (see the short sketch after this list)
- Strengths and weaknesses of LIME, SHAP, and anchors
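As a quick taste of what is ahead, here is a minimal sketch of a LIME explanation for a single prediction. The dataset, the small neural network, and all parameters are illustrative placeholders rather than the chapter's actual example, and it assumes the `lime` package is installed.

```python
# Hedged sketch: explaining one prediction of a small neural network
# with LIME. Dataset, architecture, and parameters are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=42),
).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs this one instance, queries the model, and fits a
# weighted linear surrogate that is faithful only locally.
exp = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
print(exp.as_list())  # top features driving this single prediction
```

Note how the explanation is local: it describes why the model made this one prediction, not how the model behaves overall. That distinction is the theme of this chapter.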
In the previous chapter, we looked at tree ensembles, especially random forest models, and learned how to interpret them using model-agnostic methods that are global in scope, such as partial dependence plots (PDPs) and feature interaction plots. We saw that PDPs are a great way of understanding how individual feature values impact the final model ...
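To recap that idea with something concrete, here is a minimal sketch of the kind of PDP covered in the previous chapter, using scikit-learn; the dataset and the chosen features are illustrative, not the previous chapter's actual example.

```python
# Hedged sketch of a partial dependence plot (a global explanation),
# recapping chapter 3; dataset and features are placeholders.
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=42).fit(X, y)

# Average the model's prediction over the data while sweeping one
# feature at a time: this shows each feature's global effect.
PartialDependenceDisplay.from_estimator(model, X, features=["bmi", "s5"])
plt.show()
```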