14 Latent space and generative modeling, autoencoders, and variational autoencoders

This chapter covers

  • Representing inputs with latent vectors
  • Geometrical view, smoothness, continuity, and regularization for latent spaces
  • PCA and linear latent spaces
  • Autoencoders and reconstruction loss
  • Variational autoencoders (VAEs) and regularizing latent spaces

Mapping input vectors to a transformed space is often beneficial in machine learning. The transformed vector is called a latent vector (latent because it is not directly observable), while the input is the directly observed vector. The latent vector (aka embedding) is a simpler representation of the input vector in which only features that help accomplish the ultimate goal (such as estimating the probability ...
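As a concrete illustration of this mapping, here is a minimal sketch (assuming PyTorch) of an encoder network that compresses an observed input vector into a lower-dimensional latent vector. The names `Encoder`, `input_dim`, and `latent_dim`, and the layer sizes, are illustrative choices, not the chapter's own architecture.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an observed input vector x to a latent vector z (an embedding)."""

    def __init__(self, input_dim: int = 784, latent_dim: int = 16):
        super().__init__()
        # Compress the observed input into a smaller latent representation,
        # discarding features irrelevant to the downstream goal.
        self.net = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

encoder = Encoder()
x = torch.randn(1, 784)  # one observed input vector
z = encoder(x)           # its latent vector (embedding)
print(z.shape)           # torch.Size([1, 16])
```

Here the 784-dimensional input (e.g., a flattened 28 x 28 image) is represented by only 16 latent dimensions; the autoencoders and variational autoencoders discussed later in the chapter are ways of training such an encoder so that the latent vector retains the information that matters.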
