© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2023
P. Mishra, PyTorch Recipes, https://doi.org/10.1007/978-1-4842-8925-9_10

10. PyTorch Model Interpretability and Interface to Sklearn

Pradeepta Mishra
Bangalore, Karnataka, India

Model interpretability is an area that needs special attention because it is tied to model adoption in particular and to AI adoption in general. Users are far more likely to adopt a model and framework if they can explain the decisions or predictions that a deep learning model generates. In this chapter, you will explore a framework called Captum, which provides a set of algorithms that explain, or help you interpret, the predictions, model results, and layers of a neural network model. ...
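As a minimal sketch of the kind of attribution workflow this chapter covers, the following snippet applies Captum's Integrated Gradients algorithm to a toy classifier; the two-layer model and random input tensor are illustrative placeholders, not examples from the book, and Captum is assumed to be installed (pip install captum).

import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Hypothetical two-layer classifier used only to demonstrate the API.
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 3),
)
model.eval()

# Two sample rows with four features each; gradients are needed for attribution.
inputs = torch.randn(2, 4, requires_grad=True)

# Integrated Gradients attributes each prediction back to the input features.
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(inputs, target=0, return_convergence_delta=True)

print(attributions)  # per-feature contribution to the class-0 score
print(delta)         # convergence check for the attribution approximation

The attributions tensor has the same shape as the input, so each value can be read as how much that feature pushed the model toward the chosen target class.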
