Based on the size of the hidden layer, AEs can be classified into two types, undercomplete AEs and overcomplete AEs:
- Undercomplete AE: If the AE simply learns to copy the input to the output, it is not useful. The idea is for the encoder's output to be a concise representation that captures the most useful features of the input. The degree of compression is governed by the number of neurons in the latent-space representation, which can be set as a hyperparameter when building the AE. If the number of latent neurons is smaller than the number of input features, the AE is forced to learn the most salient features of the input; such an AE is called undercomplete.
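As a minimal sketch of the bottleneck idea, the toy example below builds a linear undercomplete AE in plain NumPy: 8 input features are squeezed through a 3-neuron latent layer, and gradient descent on the reconstruction error drives the encoder to keep only the directions that matter. All names (`W_enc`, `encode`, the data sizes, learning rate) are illustrative choices, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, latent_dim = 8, 3  # latent_dim < input_dim -> undercomplete

# Single linear layers stand in for the encoder and decoder.
W_enc = rng.normal(scale=0.1, size=(input_dim, latent_dim))
W_dec = rng.normal(scale=0.1, size=(latent_dim, input_dim))

def encode(x):
    return x @ W_enc  # concise latent representation

def decode(z):
    return z @ W_dec  # reconstruction of the input

# Toy data that actually lies on a 3-D subspace of the 8-D input space,
# standardized so the error scale is easy to read.
X = rng.normal(size=(256, latent_dim)) @ rng.normal(size=(latent_dim, input_dim))
X /= X.std()

mse0 = float(np.mean((decode(encode(X)) - X) ** 2))  # error before training

lr = 0.02
for _ in range(3000):
    Z = encode(X)
    err = decode(Z) - X  # reconstruction error
    # Gradients of the mean-squared reconstruction loss.
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse = float(np.mean((decode(encode(X)) - X) ** 2))
print(f"latent dim {latent_dim} < input dim {input_dim}: MSE {mse0:.4f} -> {mse:.4f}")
```

Because the bottleneck has fewer neurons than the input, the network cannot simply copy its input; it must learn the low-dimensional structure of the data to reconstruct it well.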