Summary
In this chapter, we have seen two of the most powerful techniques at the core of many practical deep learning implementations: autoencoders and restricted Boltzmann machines.
For both of them, we started with a shallow example with a single hidden layer, and we explored how we can stack several of them together to form a deep neural network able to automatically learn high-level, hierarchical features without requiring explicit human knowledge.
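To make the stacking idea concrete, here is a minimal sketch of a deep autoencoder, assuming Keras (via TensorFlow) is available; the layer sizes and the random stand-in data are illustrative assumptions, not the chapter's exact code. Each Dense layer corresponds to one stacked hidden layer, with the narrowest layer acting as the learned high-level representation.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Illustrative stand-in data: 1,000 flattened 28x28 "images".
X = np.random.rand(1000, 784).astype("float32")

# Stacked (deep) autoencoder: each extra hidden layer learns features
# of the features below it; the 32-unit bottleneck is the final code.
model = keras.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(128, activation="relu"),    # first-level features
    layers.Dense(32, activation="relu"),     # high-level, compressed code
    layers.Dense(128, activation="relu"),    # start of the decoder
    layers.Dense(784, activation="sigmoid"), # reconstruction of the input
])
model.compile(optimizer="adam", loss="mse")

# Trained to reproduce its own input, so no labels are needed.
model.fit(X, X, epochs=5, batch_size=64, verbose=0)
```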
They both serve similar purposes, but there is one substantial difference between them.
Autoencoders can be seen as a compression filter: we compress the data so as to preserve only its most informative part, and we can then deterministically reconstruct an approximation of the original data from that compressed code.
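The shallow, one-hidden-layer case makes this compress-then-reconstruct cycle explicit. The following is a minimal sketch in plain NumPy; the dimensions, learning rate, and toy data are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 8))             # toy data: 200 samples, 8 features

n_in, n_code = 8, 3                  # bottleneck smaller than the input
W1 = rng.normal(0.0, 0.1, (n_in, n_code))   # encoder weights
b1 = np.zeros(n_code)
W2 = rng.normal(0.0, 0.1, (n_code, n_in))   # decoder weights
b2 = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(2000):
    H = sigmoid(X @ W1 + b1)         # compress: 8 -> 3 values per sample
    X_hat = sigmoid(H @ W2 + b2)     # reconstruct: 3 -> 8 values
    err = X_hat - X                  # reconstruction error

    # Backpropagate the squared-error loss through both sigmoids.
    d_out = err * X_hat * (1.0 - X_hat)
    d_hid = (d_out @ W2.T) * H * (1.0 - H)
    W2 -= lr * (H.T @ d_out) / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * (X.T @ d_hid) / len(X)
    b1 -= lr * d_hid.mean(axis=0)

print("reconstruction MSE:", float(np.mean(err ** 2)))
```

Note the deterministic part of the claim: with fixed weights, the same input always maps to the same code and the same reconstruction, in contrast with the stochastic sampling of a restricted Boltzmann machine.

Autoencoders ...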