Model interpretability deserves special attention because it is tied to model adoption in particular and AI adoption in general. Users are more likely to adopt a model and framework when they can explain the decisions or predictions the deep learning model generates. In this chapter, you will explore a new framework called Captum, which provides a set of algorithms that explain, or help us interpret, a neural network model's predictions, results, and layers. ...
10. PyTorch Model Interpretability and Interface to Sklearn
From PyTorch Recipes: A Problem-Solution Approach to Build, Train and Deploy Neural Network Models.