Implementing variational autoencoders

In Chapter 4, Implementing Autoencoders with Keras, we learned about autoencoders. We know that an autoencoder learns to represent input data in a latent feature space of reduced dimensions; it learns an arbitrary function that maps the input to a compressed latent representation. A variational autoencoder (VAE), instead of learning an arbitrary function, learns the parameters of a probability distribution over the compressed representation. By sampling points from this distribution and decoding them, we can generate new data. A VAE consists of an encoder network and a decoder network.

The structure of a VAE is illustrated in the following diagram:

Let's understand the roles of the encoder and decoder networks in a VAE.
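
To make these roles concrete, here is a minimal sketch of a VAE in R with Keras, modeled on the classic Keras VAE example rather than on the exact code of this recipe. The dimensions (original_dim, intermediate_dim, latent_dim), the layer sizes, and the standalone generator at the end are illustrative assumptions. The encoder outputs the mean and log-variance of the latent distribution, a sampling layer draws a latent vector using the reparameterization trick, and the decoder maps that vector back to the input space:

library(keras)

# Assumed dimensions for illustration (for example, flattened 28 x 28 images)
original_dim     <- 784L
intermediate_dim <- 256L
latent_dim       <- 2L

# Encoder: maps the input to the parameters (mean and log-variance)
# of the latent probability distribution
x <- layer_input(shape = original_dim)
h <- layer_dense(x, intermediate_dim, activation = "relu")
z_mean    <- layer_dense(h, latent_dim)
z_log_var <- layer_dense(h, latent_dim)

# Reparameterization trick: sample z = mean + sigma * epsilon
sampling <- function(arg) {
  z_mean    <- arg[, 1:latent_dim]
  z_log_var <- arg[, (latent_dim + 1):(2 * latent_dim)]
  epsilon   <- k_random_normal(shape = k_shape(z_mean))
  z_mean + k_exp(z_log_var / 2) * epsilon
}
z <- layer_concatenate(list(z_mean, z_log_var)) %>% layer_lambda(sampling)

# Decoder: reconstructs the input from a sampled latent vector
decoder_h   <- layer_dense(units = intermediate_dim, activation = "relu")
decoder_out <- layer_dense(units = original_dim, activation = "sigmoid")
x_decoded   <- decoder_out(decoder_h(z))

# End-to-end VAE
vae <- keras_model(x, x_decoded)

# Loss = reconstruction error + KL divergence between the learned
# latent distribution and a standard normal prior
vae_loss <- function(x, x_decoded) {
  reconstruction <- original_dim * loss_binary_crossentropy(x, x_decoded)
  kl <- -0.5 * k_sum(1 + z_log_var - k_square(z_mean) - k_exp(z_log_var),
                     axis = -1L)
  reconstruction + kl
}
vae %>% compile(optimizer = "rmsprop", loss = vae_loss)

# Standalone generator: decode points sampled from the latent space
latent_input <- layer_input(shape = latent_dim)
generator    <- keras_model(latent_input, decoder_out(decoder_h(latent_input)))
new_data     <- predict(generator, matrix(rnorm(2 * latent_dim), ncol = latent_dim))

Because the decoder layers are reused, the generator at the end illustrates the property stated above: sampling points from the latent distribution and decoding them yields new data.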
