Math and Architectures of Deep Learning
book

by Krishnendu Chaudhury
May 2024
Content level: Intermediate to advanced
552 pages
18h 3m
English
Manning Publications
Content preview from Math and Architectures of Deep Learning

14 Latent space and generative modeling, autoencoders, and variational autoencoders

This chapter covers

  • Representing inputs with latent vectors
  • Geometrical view, smoothness, continuity, and regularization for latent spaces
  • PCA and linear latent spaces
  • Autoencoders and reconstruction loss
  • Variational autoencoders (VAEs) and regularizing latent spaces

Mapping input vectors to a transformed space is often beneficial in machine learning. The transformed vector is called a latent vector—latent because it is not directly observable—while the input is the underlying observed vector. The latent vector (aka embedding) is a simpler representation of the input vector where only features that help accomplish the ultimate goal (such as estimating the probability ...
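As a concrete illustration (a minimal sketch, not from the book's text; it assumes PyTorch and hypothetical dimensions, with 784-dimensional inputs compressed to a 32-dimensional latent space), a small learned encoder can map an observed input vector x to its latent vector z:

import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an observed input vector to a lower-dimensional latent vector."""
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 128),   # compress the observed input
            nn.ReLU(),
            nn.Linear(128, latent_dim),  # produce the latent vector (embedding)
        )

    def forward(self, x):
        return self.net(x)

x = torch.randn(1, 784)  # an observed input vector (e.g., a flattened image)
z = Encoder()(x)         # its latent representation
print(z.shape)           # torch.Size([1, 32])

In a full autoencoder, a decoder maps z back toward x, and minimizing the reconstruction loss pushes the encoder to retain only the features needed to rebuild the input.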


Publisher Resources

ISBN: 9781617296482