Interpreting models to ensure fairness
In Chapter 8, Privacy, Debugging, and Launching Your Products, we discussed model interpretability as a debugging method. We used LIME to spot the features that the model is overfitting to.
In this section, we will use a slightly more sophisticated method called SHAP (SHapley Additive exPlanations). SHAP unifies several different explanation approaches into one neat method that lets us generate explanations for individual predictions as well as for entire datasets, so that we can understand the model better.
You can find SHAP on GitHub at https://github.com/slundberg/shap and install it locally with pip install shap. Kaggle kernels have SHAP preinstalled.
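Before we turn to the worked example, the following minimal sketch shows what a typical SHAP workflow looks like for a tree-based model. The synthetic data, the feature names, and the choice of a random forest are illustrative assumptions and are not part of the book's code; on a real project you would explain the model you actually trained.

import shap
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in data; the column names are purely illustrative
X, y = make_regression(n_samples=500, n_features=5, noise=0.1, random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(X.shape[1])])

# Any tree-based model works with TreeExplainer; a random forest keeps it simple
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Compute SHAP values: one additive contribution per feature per prediction
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Explain a single prediction (interactive force plot in a notebook)
shap.initjs()
shap.force_plot(explainer.expected_value, shap_values[0, :], X.iloc[0, :])

# Explain the whole dataset at once with a summary plot
shap.summary_plot(shap_values, X)

TreeExplainer is used here because it computes SHAP values efficiently for tree ensembles; for other model types, SHAP also offers explainers such as shap.DeepExplainer and shap.KernelExplainer.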
Tip
The example code given here is from the SHAP example ...