June 2020
Local Interpretable Model-Agnostic Explanations (LIME) is an approach that can explain individual predictions made by a trained model. Because it is model-agnostic, it works with most types of trained machine learning models.
LIME explains a decision by inducing small changes to the input around the instance in question and observing the effect each change has on the model's output, thereby approximating the local decision boundary for that instance. Repeating this over many perturbations yields a contribution for each variable, so from the output we can see which variable has the most influence on that instance.
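The perturbation loop described above can be sketched in plain NumPy and scikit-learn. This is a minimal illustration of the idea, not the `lime` library itself: `black_box` is a hypothetical stand-in for the house price model, and the Gaussian proximity weighting is an assumed simplification of LIME's kernel.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical black-box model standing in for the house price model:
# the price depends strongly on feature 0, weakly on features 1 and 2.
def black_box(X):
    return 50.0 * X[:, 0] - 5.0 * X[:, 1] + 0.5 * X[:, 2]

rng = np.random.default_rng(0)
instance = np.array([120.0, 30.0, 2.0])  # the prediction we want to explain

# 1. Induce small random changes to the input around the instance.
perturbed = instance + rng.normal(scale=1.0, size=(1000, 3))

# 2. Query the model on each perturbed sample.
preds = black_box(perturbed)

# 3. Weight samples by proximity to the original instance.
dists = np.linalg.norm(perturbed - instance, axis=1)
weights = np.exp(-(dists ** 2) / 2.0)

# 4. Fit a weighted linear surrogate that mimics the model locally.
surrogate = LinearRegression().fit(perturbed, preds, sample_weight=weights)

# The surrogate's coefficients rank the local influence of each variable.
influence = np.abs(surrogate.coef_)
print(influence.argmax())  # feature 0 has the most influence here
```

The surrogate's coefficients play the role of LIME's per-variable explanation: the larger a coefficient's magnitude, the more that variable moves the prediction near this instance.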
Let's see how we can use LIME to make the individual predictions of our house price model explainable:
If you have never used LIME before, you need to install ...