If you're thinking that the task of reconstructing an output doesn't sound especially useful, you're not alone. What exactly do we use these networks for? Autoencoders extract useful features from data when no labels are available. To illustrate how this works, let's walk through an example using TensorFlow. We're going to reconstruct the MNIST dataset here, and, later on, we will compare the performance of the standard autoencoder against the variational autoencoder on the same task.
Let's get started with our imports and data. MNIST is contained natively within TensorFlow, so we can easily import it:
import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data
...
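Before building the network, the data needs to be loaded and a placeholder defined for the flattened images. The following is a minimal sketch of how that step might look, assuming the TensorFlow 1.x tutorial helper imported above (it is deprecated in TensorFlow 2.x); the directory name "MNIST_data/" and the variable names are illustrative choices, not part of the original:

# Download (if needed) and load MNIST; one_hot labels are not strictly
# required for reconstruction, but are convenient for later comparisons.
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# Each 28 x 28 image is flattened into a 784-dimensional input vector.
n_inputs = 28 * 28
X = tf.placeholder(tf.float32, shape=[None, n_inputs])

With the data loaded this way, batches can be drawn during training with mnist.train.next_batch(batch_size), which returns the images and labels as NumPy arrays.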