Variational autoencoders

Very deep autoencoders are hard to train and are prone to overfitting. Many developments have improved how autoencoders are trained, such as generative pretraining with a Restricted Boltzmann Machine (RBM). Variational autoencoders (VAEs) are also generative models, and compared to other deep generative models, VAEs are computationally tractable and stable, and can be trained with the efficient backpropagation algorithm. They are inspired by the idea of variational inference in Bayesian analysis.

The idea of variational inference is as follows: given the observed data x, the exact posterior probability distribution over the latent variables is too complicated to work with. So, let's approximate that complicated ...
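To make the variational idea concrete, here is a minimal NumPy sketch (the function names are our own, not from the book) of two ingredients at the heart of a VAE: sampling from the approximating Gaussian posterior via the reparameterization trick, which keeps the sampling step differentiable so backpropagation can flow through it, and the closed-form KL divergence between that diagonal Gaussian and a standard normal prior, which appears as the regularization term in the VAE loss.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
    # The randomness lives in eps, so gradients can flow through mu and log_var.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ):
    # 0.5 * sum( sigma^2 + mu^2 - 1 - log sigma^2 )
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

rng = np.random.default_rng(0)
mu, log_var = np.array([0.5, -0.2]), np.array([-1.0, 0.3])
z = reparameterize(mu, log_var, rng)      # one sample from q(z|x)
kl = kl_standard_normal(mu, log_var)      # regularizer pulling q toward the prior
```

When mu = 0 and log_var = 0, the approximating posterior equals the prior and the KL term is exactly zero; training trades this term off against reconstruction error.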
