Chapter 10: Implementing DL Explainability with MLflow

The importance of deep learning (DL) explainability is now well established, as we learned in the previous chapter. To implement DL explainability in a real-world project, it is desirable to log the explainer and the explanations as artifacts in the MLflow server, just like other model artifacts, so that we can easily track and reproduce the explanations. DL explainability tools such as SHAP (https://github.com/slundberg/shap) can be integrated with MLflow through several mechanisms, and it is important to understand which of these mechanisms fit our DL explainability scenarios. In this chapter, we will explore several ways to integrate SHAP explanations and explainers into MLflow.
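As a first taste of what this looks like in practice, the following minimal sketch computes SHAP values for a simple model and logs them as artifacts in an MLflow run. It is an illustration under stated assumptions, not the chapter's actual example: the model, dataset, and file names here are placeholders, and it assumes shap, mlflow, numpy, and scikit-learn are installed.

```python
# A minimal, illustrative sketch of logging SHAP explanations as MLflow
# artifacts; the model, data, and file names are placeholders.
import numpy as np
import shap
import mlflow
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(n_estimators=50).fit(X, y)

with mlflow.start_run():
    # Build an explainer around the model's predict function, using a
    # small background sample to keep the computation fast.
    explainer = shap.Explainer(model.predict, shap.sample(X, 50))
    shap_values = explainer(X[:20])

    # Persist the raw SHAP values and log them as a run artifact so the
    # explanation can be tracked and reproduced alongside the model.
    np.save("shap_values.npy", shap_values.values)
    mlflow.log_artifact("shap_values.npy")
```

MLflow also ships an experimental mlflow.shap module whose log_explanation helper computes and logs a SHAP explanation in a single call; which integration mechanism to choose depends on the explainability scenario, as this chapter discusses.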
