6 Understanding layers and units
This chapter covers
- Dissecting a black-box convolutional neural network to understand the features or concepts that are learned by the layers and units
- Running the network dissection framework
- Quantifying and visualizing the interpretability of layers and units in a convolutional neural network
- Strengths and weaknesses of the network dissection framework
In chapters 3, 4, and 5, we focused on black-box models and how to interpret them using techniques such as partial dependence plots (PDPs), LIME, SHAP, anchors, and saliency maps. Chapter 5 dealt specifically with convolutional neural networks (CNNs) and visual attribution methods such as gradients and activation maps ...
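Before diving in, it helps to see the core measurement that network dissection relies on: a unit is scored by how well its thresholded activation map overlaps a labeled concept segmentation mask, using intersection over union (IoU). The sketch below is a minimal, self-contained illustration with toy arrays and a made-up threshold, not the actual framework's code:

```python
import numpy as np

def unit_concept_iou(activation, concept_mask, threshold):
    """IoU between the region where the unit fires (activation > threshold)
    and a binary mask marking where a visual concept appears."""
    unit_mask = activation > threshold
    intersection = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    return intersection / union if union > 0 else 0.0

# Toy example: a unit that fires strongly on the left half of a 4x4 map...
activation = np.array([[0.9, 0.8, 0.1, 0.0],
                       [0.7, 0.9, 0.2, 0.1],
                       [0.8, 0.6, 0.0, 0.1],
                       [0.9, 0.7, 0.1, 0.2]])

# ...scored against a hypothetical "left half" concept mask.
concept_mask = np.zeros((4, 4), dtype=bool)
concept_mask[:, :2] = True

print(unit_concept_iou(activation, concept_mask, threshold=0.5))  # → 1.0
```

A unit whose firing region matches the concept mask perfectly scores 1.0; a unit unrelated to the concept scores near 0. The framework aggregates such scores across many units and concepts to quantify a layer's interpretability, which is what the rest of this chapter walks through.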