December 2018 · Beginner to intermediate · 684 pages · 21h 9m · English
Deep neural networks and complex ensembles are often viewed with suspicion because they can act as impenetrable black boxes, particularly in light of the risks of backtest overfitting. In Chapter 11, Gradient Boosting Machines, we introduced several methods to gain insight into how these models make predictions.
In addition to conventional measures of feature importance, the game-theoretic approach of SHapley Additive exPlanations (SHAP) represents a significant step towards understanding the mechanics of complex models. SHAP values attribute a prediction exactly to individual features and their values, so it becomes easier to validate a model's logic in the light of specific theories about ...
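The exact attribution property can be illustrated with a minimal from-scratch sketch. The toy model, feature values, and baseline below are hypothetical (chosen only to keep the computation exact and readable); in practice, libraries such as shap compute these attributions efficiently for tree ensembles rather than enumerating all feature subsets:

```python
from itertools import combinations
from math import factorial

# Hypothetical toy model over three features, including an interaction term.
def model(x):
    return 2.0 * x[0] + x[1] * x[2]

background = [0.0, 0.0, 0.0]   # baseline values standing in for "feature absent"
x = [1.0, 2.0, 3.0]            # instance whose prediction we want to explain
n = len(x)

def value(subset):
    """Model output when only features in `subset` take their actual values."""
    z = [x[i] if i in subset else background[i] for i in range(n)]
    return model(z)

def shapley(i):
    """Exact Shapley value of feature i: the weighted average of its marginal
    contribution over all subsets of the remaining features."""
    others = [j for j in range(n) if j != i]
    phi = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            phi += weight * (value(set(subset) | {i}) - value(set(subset)))
    return phi

phi = [shapley(i) for i in range(n)]
# Additivity: the attributions sum exactly to the prediction minus the baseline.
print(phi, sum(phi), model(x) - model(background))
```

Note how the interaction between the second and third features is split evenly between them, while the additive first feature receives exactly its own contribution; this additivity is what makes SHAP attributions exact rather than approximate.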