LIME stands for Local Interpretable Model-Agnostic Explanations. LIME can explain the predictions of any machine learning classifier, not just deep learning models. It works by making many small perturbations to the input around a given instance and observing how the model's predictions change, then fitting a simple, interpretable model that approximates the decision boundary locally around that instance. The weights of this local model reveal which features have the most influence on the prediction for that instance. The technique is described in the following paper: Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. "Why Should I Trust You?: Explaining the Predictions of Any Classifier." Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016.
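To make the workflow concrete, here is a minimal sketch of using the lime package with a tabular classifier. The scikit-learn random forest and the Iris dataset are illustrative stand-ins chosen for this sketch, not the model used in this chapter:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Stand-in classifier: any model exposing predict_proba would work here.
iris = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(iris.data, iris.target)

# The explainer perturbs feature values around an instance and fits a
# local linear surrogate to the classifier's predicted probabilities.
explainer = LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=iris.target_names,
    mode='classification',
)

instance = iris.data[0]
explanation = explainer.explain_instance(
    instance, model.predict_proba, num_features=4
)

# Each (feature, weight) pair shows how strongly that feature pushed
# the prediction for this one instance.
for feature, weight in explanation.as_list():
    print(f'{feature}: {weight:+.3f}')
```

Because the explanation is fit per instance, running explain_instance on a different row can rank the features quite differently, which is exactly the "local" part of LIME.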
Let's look at using LIME to analyze the model from the previous ...