Transposed convolutions

We know that applying a convolution repeatedly to an image reduces its size, but what if we would like to go in the opposite direction; that is, map from the shape of the output back to the shape of the input while still maintaining local connectivity? To do this, we use the transposed convolution, which takes its name from matrix transposition (which you should remember from Chapter 1, Vector Calculus).

Let's suppose we have a 4 × 4 input and a 3 × 3 kernel. A convolution with stride 1 and no padding produces a 2 × 2 output, so we can unroll the kernel into a 4 × 16 matrix: each row holds the nine kernel weights, placed in the columns corresponding to the input pixels they overlap, with zeros everywhere else. Multiplying this matrix by the flattened 16-element input then carries out the convolution as a single matrix multiplication, and multiplying by its transpose maps 4 values back to 16.
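As a sketch of this construction (the function name `conv_matrix` and the example values are illustrative, not from the book), we can build the 4 × 16 matrix in NumPy, verify that multiplying it by the flattened input reproduces the convolution, and see that its transpose maps the 4 output values back to 16:

```python
import numpy as np

def conv_matrix(kernel, input_size=4):
    """Unroll a square kernel into the matrix that performs a valid,
    stride-1 convolution on a flattened input_size x input_size image."""
    k = kernel.shape[0]                  # kernel side length (3)
    out = input_size - k + 1             # output side length (2)
    C = np.zeros((out * out, input_size * input_size))
    for i in range(out):                 # output row
        for j in range(out):             # output column
            for a in range(k):           # kernel row
                for b in range(k):       # kernel column
                    # place kernel weight at the input pixel it overlaps
                    C[i * out + j, (i + a) * input_size + (j + b)] = kernel[a, b]
    return C

kernel = np.arange(1.0, 10.0).reshape(3, 3)
x = np.arange(16.0).reshape(4, 4)

C = conv_matrix(kernel)                  # shape (4, 16)
y = C @ x.ravel()                        # convolution: 16 values -> 4 (a 2 x 2 map)
x_up = C.T @ y                           # transposed convolution: 4 values -> 16

print(C.shape, y.reshape(2, 2).shape, x_up.reshape(4, 4).shape)
```

Note that `C.T @ y` recovers the *shape* of the input, not its values; the transposed convolution shares the connectivity pattern of the forward pass but is not its inverse.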
If you look closely, ...
