Based on the restrictions imposed on the loss, AEs can be grouped into the following types:
- Plain Vanilla AEs: This is the simplest possible AE architecture, with a single fully-connected layer serving as the encoder and another as the decoder (a minimal sketch follows this list).
- Sparse AEs: Sparse AEs are an alternative way to introduce an information bottleneck without reducing the number of nodes in the hidden layers. Rather than making the AE undercomplete, the loss function is constructed so that it penalizes the activations within a layer. For any given observation, the network is encouraged to learn an encoding and decoding that rely on activating only a small number of neurons (see the sparse sketch after this list).
- Denoising AEs: This is a type of overcomplete AE that is trained to reconstruct the original input from a copy that has been deliberately corrupted with noise. Because the target differs from the input, the network cannot simply learn the identity mapping and must instead capture the underlying structure of the data (see the denoising sketch after this list).
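
To make the vanilla case concrete, here is a minimal sketch in PyTorch, assuming a 784-dimensional input (e.g. flattened 28×28 images) and a 32-unit code; the class name `VanillaAE` and all layer sizes are illustrative, not prescribed:

```python
import torch
from torch import nn

# A minimal plain vanilla AE: one fully-connected encoder layer
# and one fully-connected decoder layer. The undercomplete code
# (32 units for a 784-dimensional input) is the bottleneck.
class VanillaAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Linear(input_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, input_dim)

    def forward(self, x):
        z = torch.relu(self.encoder(x))        # compressed code
        return torch.sigmoid(self.decoder(z))  # reconstruction

model = VanillaAE()
x = torch.rand(16, 784)                        # dummy batch of inputs
loss = nn.functional.mse_loss(model(x), x)     # plain reconstruction loss
```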
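
The sparse variant changes the loss rather than the architecture. The sketch below, again with illustrative sizes and a hypothetical `sparsity_weight` hyperparameter, adds an L1 penalty on the hidden activations: the hidden layer can be wider than the input (overcomplete), yet only a few neurons are encouraged to fire for any given observation.

```python
import torch
from torch import nn

# A sparse AE sketch: the hidden layer is overcomplete
# (1024 units for 784 inputs), and the bottleneck comes from
# penalizing the magnitude of the hidden activations instead.
class SparseAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=1024):
        super().__init__()
        self.encoder = nn.Linear(input_dim, hidden_dim)
        self.decoder = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        z = torch.relu(self.encoder(x))             # hidden activations
        return torch.sigmoid(self.decoder(z)), z

model = SparseAE()
x = torch.rand(16, 784)
recon, z = model(x)
sparsity_weight = 1e-4  # illustrative value, tuned in practice
# Reconstruction error plus an L1 penalty on the activations.
loss = nn.functional.mse_loss(recon, x) + sparsity_weight * z.abs().mean()
```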
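
Finally, a sketch of the denoising setup, assuming Gaussian corruption with an illustrative noise level of 0.2: the network sees the corrupted input, but the reconstruction loss is computed against the clean original, so copying the input through is never optimal.

```python
import torch
from torch import nn

# Denoising training loop step: corrupt the input with Gaussian
# noise, then reconstruct the *clean* input from the noisy copy.
model = nn.Sequential(
    nn.Linear(784, 1024), nn.ReLU(),    # overcomplete hidden layer
    nn.Linear(1024, 784), nn.Sigmoid(),
)
x_clean = torch.rand(16, 784)
x_noisy = x_clean + 0.2 * torch.randn_like(x_clean)  # corrupted copy
loss = nn.functional.mse_loss(model(x_noisy), x_clean)
```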