Chapter 8
Local Interpretable Model-Agnostic Explanations (LIME)
The expansion of artificial intelligence (AI) relies on trust. Users will reject machine learning (ML) systems they cannot trust, and we will not trust decisions made by models that fail to explain themselves. An AI system that cannot provide clear explanations will gradually become obsolete.
The approach of Local Interpretable Model-agnostic Explanations (LIME) aims to reduce the distance between AI and humans. Like SHAP and WIT, LIME is people-oriented. It focuses on two main areas: trusting a model and trusting a prediction. LIME provides a unique explainable AI (XAI) algorithm that interprets predictions locally.
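The core idea behind local interpretation can be sketched without the `lime` package itself: sample perturbations around one instance, weight them by proximity, and fit a simple weighted linear surrogate whose coefficients serve as local feature importances. The sketch below, with assumed names (`lime_explain`, `black_box`) and an arbitrary Gaussian-perturbation scheme and kernel width, illustrates the principle rather than the library's exact implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(predict_fn, x, n_samples=1000, kernel_width=0.75, seed=0):
    """Illustrative local surrogate in the spirit of LIME (not the library's code).

    Perturbs x with Gaussian noise, weights samples by proximity to x,
    and fits a weighted linear model whose coefficients act as local
    feature importances.
    """
    rng = np.random.default_rng(seed)
    # 1. Sample perturbations in the neighborhood of x.
    Z = x + rng.normal(scale=1.0, size=(n_samples, x.shape[0]))
    # 2. Query the black-box model on the perturbed points.
    y = predict_fn(Z)
    # 3. Weight each sample with an exponential proximity kernel.
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / (kernel_width ** 2))
    # 4. Fit an interpretable (linear) surrogate on the weighted samples.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=w)
    return surrogate.coef_  # one local importance value per feature

# Hypothetical black box: locally, the first feature dominates.
black_box = lambda Z: 3.0 * Z[:, 0] + 0.1 * Z[:, 1] ** 2
coefs = lime_explain(black_box, np.array([1.0, 1.0, 1.0]))
```

The surrogate should recover a coefficient near 3.0 for the first feature and near zero for the third, showing how a linear model that is faithful only in a small neighborhood can explain a nonlinear black box.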
I recommend a third area: trusting the datasets. A perfect model and ...