Chapter 4: LIME for Model Interpretability
In the previous chapters, we discussed the technical concepts of Explainable AI (XAI) needed to build trustworthy AI systems, and we worked through practical examples and demonstrations using various Python frameworks, all of which are available in this chapter's GitHub code repository. XAI has been an important research topic for quite some time, but only recently have organizations begun to adopt it as part of the solution life cycle for solving business problems with AI. One such popular approach is Local Interpretable Model-Agnostic Explanations (LIME), which has been widely adopted to provide local, model-agnostic explanations of individual model predictions.
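Before diving into the details, the core idea behind LIME can be sketched in a few lines: perturb the instance being explained, weight the perturbed samples by their proximity to that instance, and fit a simple weighted linear surrogate whose coefficients serve as local feature importances. The sketch below uses only NumPy and a toy `black_box` function; it illustrates the algorithm's logic, not the `lime` library's actual API, and all names and kernel settings here are illustrative choices.

```python
import numpy as np

# Hypothetical black-box model: nonlinear in feature 0, linear in feature 1.
def black_box(X):
    return X[:, 0] ** 2 + 3.0 * X[:, 1]

rng = np.random.default_rng(0)
x0 = np.array([1.0, 2.0])  # the instance we want to explain

# Step 1: perturb the instance with Gaussian noise and query the model.
Z = x0 + rng.normal(scale=0.1, size=(500, 2))
y = black_box(Z)

# Step 2: weight each sample by proximity to x0 (exponential kernel).
d2 = ((Z - x0) ** 2).sum(axis=1)
w = np.exp(-d2 / 0.25)

# Step 3: fit a weighted linear surrogate via least squares.
A = np.hstack([Z, np.ones((len(Z), 1))])   # features plus intercept
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)

# The coefficients approximate the local gradient at x0:
# d/dx0 of x0**2 at x0=1 is ~2.0, and feature 1's coefficient is ~3.0.
print(beta[:2])
```

The coefficients recovered by the surrogate track the black-box model's local behavior around `x0`, which is exactly the kind of explanation LIME produces; the real library adds careful sampling strategies, interpretable feature representations, and feature selection on top of this skeleton.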