Chapter 7. Autoencoders
The first six chapters of this book explored how to use unsupervised learning to perform dimensionality reduction and clustering, and the concepts we covered helped us build applications to detect anomalies and segment groups based on similarity.
However, unsupervised learning is capable of a lot more. One area where unsupervised learning excels is feature extraction, a method for generating a new feature representation from an original set of features; this new representation is called a learned representation, and it is used to improve performance on supervised learning problems. In other words, feature extraction is an unsupervised means to a supervised end.
Autoencoders are one such form of feature extraction. They use a feedforward, nonrecurrent neural network to perform representation learning, which is a core part of an entire branch of machine learning involving neural networks.
In an autoencoder, each layer of the neural network learns a representation of the original features, and subsequent layers build on the representations learned by the preceding layers. Layer by layer, the autoencoder learns increasingly complicated representations from simpler ones, building what is known as a hierarchy of concepts that become more and more abstract.
The output layer is the final newly learned representation of the original features. This learned representation can then be used as ...
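To make the idea concrete, here is a minimal, hand-rolled sketch of an autoencoder using only NumPy. This is an illustrative toy, not the book's implementation (the chapter's examples use a deep learning framework): a single hidden layer compresses 8-dimensional inputs into a 3-dimensional code (the learned representation), and a decoder reconstructs the input from that code by minimizing reconstruction error. All variable names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, n_hidden = 8, 3
X = rng.normal(size=(200, n_features))          # toy data

# Encoder and decoder weight matrices, small random initialization
W_enc = rng.normal(scale=0.1, size=(n_features, n_hidden))
W_dec = rng.normal(scale=0.1, size=(n_hidden, n_features))

def forward(X):
    code = np.tanh(X @ W_enc)                   # learned representation (bottleneck)
    recon = code @ W_dec                        # reconstruction of the input
    return code, recon

lr = 0.01
losses = []
for _ in range(500):
    code, recon = forward(X)
    err = recon - X                             # reconstruction error
    losses.append((err ** 2).mean())
    # Gradient descent on the mean squared reconstruction loss
    grad_dec = code.T @ err / len(X)
    grad_code = err @ W_dec.T * (1 - code ** 2) # tanh derivative
    grad_enc = X.T @ grad_code / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(f"reconstruction loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Because the bottleneck has fewer units than the input, the network cannot simply copy its input; it is forced to learn a compressed representation that preserves as much of the original information as possible. Real autoencoders stack several such layers, which is what produces the hierarchy of increasingly abstract representations described above.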