Variational autoencoders (VAEs) are inspired by the concept of the autoencoder: a model consisting of two neural networks, an encoder and a decoder. As we have seen, the encoder network tries to encode its input in a compressed form, while the decoder network tries to reconstruct the initial input from the code returned by the encoder.
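To make the two roles concrete, here is a minimal sketch of a plain autoencoder in PyTorch. The framework choice and the layer sizes (784 → 32 → 784, as for flattened MNIST images) are illustrative assumptions, not taken from the text:

```python
import torch
from torch import nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, code_dim=32):
        super().__init__()
        # Encoder: compresses the input into a low-dimensional code
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, code_dim),
        )
        # Decoder: reconstructs the input starting from the code
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        code = self.encoder(x)     # compressed representation
        return self.decoder(code)  # reconstruction of x
```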
However, a VAE works very differently from a simple autoencoder. VAEs allow not only the encoding/decoding of the input but also the generation of new data. To do this, they treat both the code z and the reconstruction/generation x' as if they were drawn from a probability distribution. In particular, VAEs are the result of combining deep learning with Bayesian inference.
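A minimal sketch of this probabilistic treatment, again in PyTorch with illustrative dimensions: instead of producing a single code vector, the encoder outputs the parameters (mean and log-variance) of a Gaussian distribution over z, and new data can be generated by sampling z from the prior and decoding it.

```python
import torch
from torch import nn

class VAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
        self.fc_mu = nn.Linear(128, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(128, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps keeps the
        # sampling step differentiable, so the model can be trained
        # end-to-end by backpropagation.
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps
        return self.decoder(z), mu, logvar

    def generate(self, n):
        # New data: sample z from the prior N(0, I) and decode it.
        z = torch.randn(n, self.fc_mu.out_features)
        return self.decoder(z)
```

The key difference from the plain autoencoder above is the sampled latent code: because z is drawn from a distribution whose parameters the encoder predicts, decoding a fresh sample from the prior yields entirely new data rather than a reconstruction.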