To cement our understanding, let's start off by building the most basic autoencoder, as shown in the following diagram:

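The diagram translates into only a few lines of code. The following is a minimal sketch, assuming tf.keras as the framework and a flattened 784-dimensional input such as MNIST; the layer sizes and activations here are illustrative choices, not fixed by the text.

```python
# Minimal autoencoder sketch. Assumptions not fixed by the text:
# tf.keras as the framework, a flattened 784-dimensional input
# (e.g. 28x28 MNIST images), and a 32-unit latent layer.
from tensorflow.keras import layers, models

input_dim = 784    # flattened input size (assumption)
latent_dim = 32    # hidden layer, deliberately smaller than the input

# Encoder: map the input down to the latent representation
inputs = layers.Input(shape=(input_dim,))
latent = layers.Dense(latent_dim, activation="relu")(inputs)

# Decoder: reconstruct the input from the latent representation
outputs = layers.Dense(input_dim, activation="sigmoid")(latent)

autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.summary()
```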
So far, we have emphasized that the hidden layer (Latent Representation) should have a smaller dimension than the input data. This bottleneck forces the latent representation to capture a compressed encoding of the input's salient features rather than simply copy the input through. But how small should it be?
Ideally, the size of the hidden layer should strike a balance between being:

- Small enough that it forms a genuinely compressed representation of the input, discarding redundancy
- Large enough that the decoder can still reconstruct the input from it without excessive loss
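One hypothetical way to get a feel for this trade-off, not prescribed by the text, is to train the same autoencoder at several latent sizes and compare the final reconstruction loss; the candidate sizes and the stand-in data below are assumptions for illustration only.

```python
# A hypothetical experiment (not from the text): train the same
# autoencoder at several latent sizes and compare reconstruction loss.
# Random stand-in data is used so the snippet runs as-is; substitute
# real inputs (e.g. flattened MNIST scaled to [0, 1]) for meaningful results.
import numpy as np
from tensorflow.keras import layers, models

input_dim = 784
x_train = np.random.rand(1024, input_dim).astype("float32")  # stand-in data

for latent_dim in (2, 8, 32, 128):
    inputs = layers.Input(shape=(input_dim,))
    latent = layers.Dense(latent_dim, activation="relu")(inputs)
    outputs = layers.Dense(input_dim, activation="sigmoid")(latent)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    history = model.fit(x_train, x_train, epochs=5, batch_size=256, verbose=0)
    # A lower final loss suggests the latent size was large enough to reconstruct well
    print(f"latent_dim={latent_dim:>3}  final loss={history.history['loss'][-1]:.4f}")
```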