Visualizing the outputs from intermediate layers will help us understand how the input image is transformed across different layers. The output from each layer is often called an activation. To inspect these, we need to extract the output of intermediate layers, which can be done in different ways. PyTorch provides a method called register_forward_hook, which lets us register a function that receives the output of a particular layer during the forward pass.
By default, PyTorch models only return the output of the last layer, to use memory optimally. So, before we inspect what the activations from the intermediate layers look like, let's understand how to extract outputs from the model. Let's look at the following code.
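The following is a minimal sketch of this idea. The class name LayerActivations, the choice of a torchvision VGG16, the hooked layer index, and the dummy input shape are all illustrative assumptions, not prescribed by the text; the pattern of registering, running a forward pass, and removing the hook is what matters.

```python
import torch
from torchvision import models

class LayerActivations:
    """Capture the output of a single layer via register_forward_hook."""
    features = None

    def __init__(self, layer):
        # PyTorch will call hook_fn with (module, input, output) on every forward pass.
        self.hook = layer.register_forward_hook(self.hook_fn)

    def hook_fn(self, module, inputs, output):
        # Detach so the stored activation does not keep the autograd graph alive.
        self.features = output.detach().cpu()

    def remove(self):
        # Remove the hook once we are done, so it no longer fires.
        self.hook.remove()

# Illustrative model and layer choice (assumptions for this sketch).
vgg = models.vgg16().eval()
act = LayerActivations(vgg.features[5])   # an intermediate conv layer

with torch.no_grad():
    _ = vgg(torch.randn(1, 3, 224, 224))  # dummy image-sized input

print(act.features.shape)                 # e.g. torch.Size([1, 128, 112, 112])
act.remove()
```

Once the forward pass has run, act.features holds the activation of the hooked layer, which can then be plotted (for example, channel by channel) to see how the image is being transformed at that depth.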