5 Saliency mapping

This chapter covers

  • Characteristics that make convolutional neural networks inherently black boxes
  • How to implement convolutional neural networks for image classification tasks
  • How to interpret convolutional neural networks using saliency mapping techniques, such as vanilla backpropagation, guided backpropagation, guided Grad-CAM, and SmoothGrad
  • Strengths and weaknesses of these saliency mapping techniques and how to perform sanity checks on them
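
The simplest of these techniques, vanilla backpropagation, computes the gradient of the predicted class score with respect to the input pixels: pixels with large gradient magnitudes are the ones the prediction is most sensitive to. The sketch below illustrates the idea in PyTorch; the tiny CNN and the 10-class output are placeholder assumptions, not a model from this book — any differentiable image classifier works the same way.

```python
import torch
import torch.nn as nn

# Hypothetical tiny CNN, used only to make the sketch runnable;
# substitute any pretrained image classifier.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),  # 10 classes is an arbitrary assumption
)
model.eval()

def vanilla_saliency(model, image):
    """Gradient of the top class score with respect to the input pixels."""
    image = image.clone().requires_grad_(True)
    scores = model(image)                    # shape: (1, num_classes)
    top_class = scores.argmax(dim=1).item()
    scores[0, top_class].backward()          # backpropagate the class score
    # Saliency map: maximum absolute gradient across the color channels
    return image.grad.abs().max(dim=1)[0].squeeze(0)

image = torch.rand(1, 3, 32, 32)             # random stand-in for a real image
saliency = vanilla_saliency(model, image)
print(saliency.shape)                        # torch.Size([32, 32])
```

The guided-backpropagation and SmoothGrad variants discussed in this chapter modify only how the gradient is computed (masking negative gradients at ReLUs, or averaging gradients over noisy copies of the input); the surrounding machinery stays the same.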

In the previous chapter, we looked at deep neural networks and learned how to interpret them using model-agnostic methods that are local in scope, specifically LIME, SHAP, and anchors. In this chapter, we will focus on convolutional neural networks (CNNs), ...
