Guided backpropagation (introduced in Striving for Simplicity: The All Convolutional Net, https://arxiv.org/abs/1412.6806) allows us to visualize the features learned by a single unit of a given layer of a CNN. The following diagram shows how the algorithm works:
Here is the step-by-step execution:
- First, we start with a regular CNN (for example, AlexNet, VGG, and so on) with ReLU activations.
- Then, we feed the network a single input image, f^(0), and propagate it forward until we reach the layer, l, that we're interested in. This could be any network ...
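
The core of the technique is how ReLU gradients are filtered during the backward pass: the gradient is zeroed both where the forward activation was negative (the standard ReLU rule) and where the incoming gradient itself is negative (the guided rule). Here is a minimal PyTorch sketch of that rule; the framework choice, the tiny stand-in network, and all names are illustrative assumptions rather than the book's own code:

```python
import torch
import torch.nn as nn

class GuidedReLUFunction(torch.autograd.Function):
    # Forward pass is a normal ReLU; the backward pass zeroes the gradient
    # both where the forward input was negative (standard ReLU rule)
    # and where the incoming gradient is negative (guided rule).
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x > 0).float() * (grad_output > 0).float()

class GuidedReLU(nn.Module):
    def forward(self, x):
        return GuidedReLUFunction.apply(x)

# A small stand-in CNN; in practice you would swap the ReLUs of a
# pretrained network such as AlexNet or VGG in the same way.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), GuidedReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), GuidedReLU(),
)

image = torch.randn(1, 3, 32, 32, requires_grad=True)  # f^(0), the input image
activations = model(image)   # forward pass up to the layer, l, of interest
unit = activations[0, 0]     # the single unit (here, a feature map) we visualize
unit.sum().backward()        # guided backward pass all the way to the input

# image.grad now holds the guided-backpropagation visualization
print(image.grad.shape)      # torch.Size([1, 3, 32, 32])
```

In practice, you would apply the same ReLU replacement to a pretrained classifier and visualize image.grad as a heatmap over the input.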