The variational autoencoder

The variational autoencoder (VAE) is another type of autoencoder, but with some important differences. Instead of learning deterministic functions, f() and g(), it learns the probability distribution of the input data.

Let's suppose we have a distribution, pθ, parameterized by θ. We can express the relationships between the input x and the latent variable z as follows:

  • pθ(z): The prior
  • pθ(x | z): The likelihood (the distribution of the input given the latent space)
  • pθ(z | x): The posterior (the distribution of the latent space given the input)
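These three distributions are related by Bayes' rule, where pθ(x) is the marginal likelihood (the evidence), obtained by integrating out the latent variable:

```latex
p_\theta(z \mid x) = \frac{p_\theta(x \mid z)\, p_\theta(z)}{p_\theta(x)},
\qquad
p_\theta(x) = \int p_\theta(x \mid z)\, p_\theta(z)\, dz
```

The integral in pθ(x) is generally intractable, which is why the VAE resorts to a variational approximation of the posterior.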

The aforementioned distributions are parameterized by neural networks, which enables them to capture complex nonlinearities, and we train those networks using gradient descent.
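As a minimal sketch of this idea, the snippet below uses a toy linear "encoder" (random weights standing in for a trained network) to produce the mean and log-variance of the approximate posterior over z, then draws a sample via the reparameterization trick so that gradients could flow through the sampling step during training. All sizes and names here are illustrative assumptions, not the book's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen purely for illustration
input_dim, latent_dim = 4, 2

# Toy "encoder": linear maps producing the mean and log-variance of the
# approximate posterior q(z | x). Random weights stand in for what a
# trained network would learn.
W_mu = rng.normal(size=(latent_dim, input_dim))
W_logvar = rng.normal(size=(latent_dim, input_dim))

def encode(x):
    return W_mu @ x, W_logvar @ x

def sample_z(mu, log_var):
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I),
    # so the randomness is external and gradients can pass through mu, sigma.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

x = rng.normal(size=input_dim)
mu, log_var = encode(x)
z = sample_z(mu, log_var)
print(z.shape)  # (latent_dim,)
```

A decoder network would then map z back to a reconstruction of x, giving the likelihood pθ(x | z).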

But ...
