Image reconstruction using VAEs

Our first example uses the MNIST data to illustrate variational autoencoders (VAEs).

The development strategy is as follows:

  • First, an encoder network turns the input samples x into two parameters in a latent space, which will be denoted z_mean and z_log_sigma
  • Then, we randomly sample similar points z from the latent normal distribution that we assume generates the data, as z = z_mean + exp(z_log_sigma) * epsilon, where epsilon is a random normal tensor
  • Finally, a decoder network maps these latent space points z back to the original input data
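The sampling step above is the reparameterization trick: z is a deterministic function of the encoder outputs plus standard normal noise, so gradients can flow through the sampling. A minimal base-R sketch, where z_mean and z_log_sigma are illustrative stand-ins for what the encoder's dense layers would output:

```r
# Reparameterization trick: z = z_mean + exp(z_log_sigma) * epsilon.
# The latent dimensionality and the encoder outputs below are
# illustrative assumptions, not values from the actual model.
set.seed(123)
latent_dim  <- 2
z_mean      <- c(0.5, -1.0)
z_log_sigma <- c(-0.2, 0.1)
epsilon <- rnorm(latent_dim)   # epsilon ~ N(0, 1)
z <- z_mean + exp(z_log_sigma) * epsilon
print(z)
```

Because the randomness is isolated in epsilon, the same trick works inside a Keras lambda layer, which is how the sampling is wired into the full network.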

We begin, as usual, by getting and preprocessing the data:

library(keras)
# Switch to the 1-based indexing from R
options(tensorflow.one_based_extract = FALSE)
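The usual MNIST preprocessing flattens each 28x28 image into a 784-long vector and rescales pixel intensities to [0, 1]. A base-R sketch of that step on a toy array (standing in for the training images returned by dataset_mnist(), so it runs without downloading anything):

```r
# Toy stand-in for the MNIST training images: 5 images of 28x28
# pixels with integer intensities in 0..255.
fake_images <- array(sample(0:255, 5 * 28 * 28, replace = TRUE),
                     dim = c(5, 28, 28))

# Flatten each image to a 784-long row; R's column-major layout
# keeps the first (sample) index intact under this reshape.
x_train <- fake_images
dim(x_train) <- c(dim(fake_images)[1], 28 * 28)

# Rescale pixel intensities to [0, 1].
x_train <- x_train / 255
```

On the real data the same two lines apply to the x components of dataset_mnist() for both the training and test splits.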
