R Deep Learning Projects by Pablo Maldonado, Yuxi Liu

Image reconstruction using VAEs

Our first example uses the MNIST data to illustrate the use of variational autoencoders (VAEs).

The development strategy is as follows:

  • First, an encoder network turns the input samples x into two parameters in a latent space, which will be denoted z_mean and z_log_sigma
  • Then, we randomly sample points z from the latent normal distribution that we assume generates the data, as z ~ z_mean + exp(z_log_sigma)*epsilon, where epsilon is a random normal tensor
  • Finally, a decoder network maps these latent space points z back to the original input data

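The three steps above can be sketched with the R keras package. The layer sizes (an intermediate layer of 256 units, a 2-dimensional latent space) are illustrative assumptions, not necessarily the book's exact choices; the sampling function implements z = z_mean + exp(z_log_sigma)*epsilon as stated above:

```r
library(keras)

latent_dim <- 2L

# Encoder: input x -> (z_mean, z_log_sigma)
input <- layer_input(shape = 784L)
h <- layer_dense(input, units = 256L, activation = "relu")
z_mean <- layer_dense(h, units = latent_dim)
z_log_sigma <- layer_dense(h, units = latent_dim)

# Reparameterization: z = z_mean + exp(z_log_sigma) * epsilon
sampling <- function(args) {
  z_mean <- args[[1]]
  z_log_sigma <- args[[2]]
  epsilon <- k_random_normal(shape = k_shape(z_mean), mean = 0, stddev = 1)
  z_mean + k_exp(z_log_sigma) * epsilon
}
z <- layer_lambda(list(z_mean, z_log_sigma), sampling)

# Decoder: latent point z -> reconstruction of x
decoder_h <- layer_dense(z, units = 256L, activation = "relu")
output <- layer_dense(decoder_h, units = 784L, activation = "sigmoid")

vae <- keras_model(input, output)
```

Sampling through a lambda layer keeps z_mean and z_log_sigma as differentiable outputs of the encoder, so gradients flow around the random draw rather than through it.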
We begin as usual, getting and preprocessing the data:

library(keras)
# Switch to the 1-based indexing from R
options(tensorflow.one_based_extract ...
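The code above is cut off in this excerpt. As a rough sketch of how loading and preprocessing MNIST typically looks with the R keras package (flattening each image to a 784-dimensional vector is an assumption about the model's input shape, consistent with the encoder taking samples x as vectors):

```r
library(keras)

# Load MNIST and scale pixel values to [0, 1]
mnist <- dataset_mnist()
x_train <- mnist$train$x / 255
x_test <- mnist$test$x / 255

# Flatten each 28x28 image into a 784-dimensional vector
x_train <- array_reshape(x_train, c(nrow(x_train), 784))
x_test <- array_reshape(x_test, c(nrow(x_test), 784))
```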
