Let's now see a VAE in action. In this example, we'll use the standard MNIST dataset to train a VAE to generate handwritten digits. Since MNIST is a simple dataset, the encoder and decoder networks will consist only of fully connected layers; this lets us concentrate on the VAE architecture itself. If you plan to generate more complex images (such as those in CIFAR-10), you'll need to replace the encoder and decoder with convolutional and deconvolutional networks:
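Before the code, here is a minimal NumPy sketch of the fully connected VAE forward pass described above, so the shapes are clear. The layer sizes (256 hidden units, 2 latent dimensions) and random parameters are illustrative assumptions, not the exact values used later:

```python
import numpy as np

rng = np.random.default_rng(42)

def dense(x, w, b, act=lambda z: z):
    """A fully connected layer: act(x @ w + b)."""
    return act(x @ w + b)

# Illustrative sizes for flattened 28x28 MNIST images (784 pixels)
n_input, n_hidden, n_latent = 784, 256, 2

# Randomly initialised parameters stand in for trained weights
params = {name: rng.normal(scale=0.1, size=shape) for name, shape in {
    "enc_w": (n_input, n_hidden),  "enc_b": (n_hidden,),
    "mu_w":  (n_hidden, n_latent), "mu_b":  (n_latent,),
    "lv_w":  (n_hidden, n_latent), "lv_b":  (n_latent,),
    "dec_w": (n_latent, n_hidden), "dec_b": (n_hidden,),
    "out_w": (n_hidden, n_input),  "out_b": (n_input,),
}.items()}

def vae_forward(x, p):
    # Encoder: one fully connected layer, then mean and log-variance heads
    h = dense(x, p["enc_w"], p["enc_b"], np.tanh)
    mu = dense(h, p["mu_w"], p["mu_b"])
    log_var = dense(h, p["lv_w"], p["lv_b"])
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)
    eps = rng.normal(size=mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps
    # Decoder: fully connected layer back to pixel space, sigmoid output
    h_dec = dense(z, p["dec_w"], p["dec_b"], np.tanh)
    x_hat = dense(h_dec, p["out_w"], p["out_b"], lambda v: 1 / (1 + np.exp(-v)))
    return x_hat, mu, log_var

x = rng.random((1, n_input))      # a fake one-image "batch"
x_hat, mu, log_var = vae_forward(x, params)
print(x_hat.shape, mu.shape)      # (1, 784) (1, 2)
```

The same structure carries over to the TensorFlow version; only the layer construction and training machinery change.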
- The first step, as in all previous cases, is to import the necessary modules. Here, we'll use TensorFlow's higher-level API, tf.contrib, to build the fully connected layers. Note that this saves us the hassle of declaring weights and biases for each layer ...
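To see concretely what a helper like tf.contrib's fully connected layer hides, here is a framework-free NumPy sketch of a single fully connected layer with its weights and biases declared by hand; the layer sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def fully_connected(x, num_outputs, activation=np.tanh):
    # The boilerplate the helper hides: create and shape W and b ourselves
    num_inputs = x.shape[-1]
    w = rng.normal(scale=0.1, size=(num_inputs, num_outputs))  # weight matrix
    b = np.zeros(num_outputs)                                  # bias vector
    return activation(x @ w + b)

# One flattened 28x28 MNIST image mapped to a 256-unit hidden layer
x = rng.random((1, 784))
h = fully_connected(x, 256)
print(h.shape)  # (1, 256)
```

With the TensorFlow helper, a single call produces the same layer while also registering `w` and `b` as trainable variables, which is what makes it convenient for a multi-layer encoder and decoder.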