5 Local Model-Agnostic Interpretation Methods
In the previous two chapters, we dealt exclusively with global interpretation methods. This chapter moves into local interpretation methods, which explain why a single prediction, or a group of predictions, was made. It will cover how to leverage the KernelExplainer from SHapley Additive exPlanations (SHAP), as well as another method called Local Interpretable Model-agnostic Explanations (LIME), for local interpretations. We will also explore how to use these methods with both tabular and text data.
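To give a first taste of the KernelExplainer workflow, here is a minimal sketch; the scikit-learn model and dataset are illustrative stand-ins, not necessarily the ones used in the chapter:

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# KernelExplainer is model-agnostic: it needs only a prediction function
# and a background sample used to simulate "missing" features.
background = shap.sample(X_train, 50)  # small background keeps it fast
explainer = shap.KernelExplainer(model.predict_proba, background)

# Local interpretation: SHAP values for one test instance.
shap_values = explainer.shap_values(X_test[:1])
```

Each SHAP value estimates how much a feature pushed this single prediction away from the background average, which is precisely what makes the explanation local rather than global.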
These are the main topics we are going to cover in this chapter:
- Leveraging SHAP's KernelExplainer for local interpretations with SHAP values
- Employing LIME (see the first sketch after this list)
- Using LIME for Natural Language Processing (see the second sketch after this list) ...
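As a preview of the LIME topics, here are two minimal sketches; the class names and example text are hypothetical placeholders rather than the chapter's own examples. The first explains a single tabular prediction, reusing the model, X_train, and X_test from the SHAP sketch above:

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer

feature_names = list(load_breast_cancer().feature_names)
tab_explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["malignant", "benign"],
    mode="classification",
)

# LIME perturbs the instance and fits a simple surrogate model locally.
exp = tab_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)
print(exp.as_list())  # top local feature contributions
```

The second applies LimeTextExplainer to a text classifier; the tiny sentiment corpus below is fabricated purely for illustration:

```python
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy sentiment data, purely illustrative.
texts = ["great movie", "terrible plot", "loved it",
         "awful acting", "wonderful film", "boring and bad"]
labels = [1, 0, 1, 0, 1, 0]
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

text_explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = text_explainer.explain_instance(
    "a great but boring film", pipeline.predict_proba, num_features=4
)
print(exp.as_list())  # words that pushed this one prediction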