Chapter 10. Generative Models
Generative models attempt to understand the latent, or underlying, process that produces the data we see. For example, when breaking down images of digits in the MNIST dataset, we can interpret some attributes of the underlying process generating each image as the digit itself (a discrete variable ranging from zero through nine), the orientation or angle at which it is drawn, the size of the resulting image, the thickness of the lines, and some noise component (all of which are continuous variables).

So far, we've been concerned with discriminative models, in either the regression or the classification setting. In the classification setting, discriminative models take as input an example such as an image from the MNIST dataset and attempt to determine the most likely digit category, from zero through nine, that the input belongs to. Generative models instead attempt to fully model the data distribution, and in the process may implicitly learn some of the features mentioned previously in order to generate images that look as if they originally came from the MNIST dataset. Note that generative modeling is a harder problem than discriminative modeling: a discriminative model may, for example, need to learn only a few features well to distinguish between the different digits in the MNIST dataset to a satisfactory degree.

Generative models come in many varieties, and in this chapter, we provide a glimpse into a vast research landscape that has begun to ...
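To make the contrast concrete, the following is a minimal, illustrative PyTorch sketch, not code from this chapter, of the two modeling directions on MNIST-shaped data. The latent factors (a discrete digit identity plus a few continuous style factors such as orientation, size, and stroke thickness), the network sizes, and names like STYLE_DIM are assumptions chosen purely for illustration.

# Illustrative sketch (not from the book): a discriminative model maps an image
# to a digit label, while a latent-variable generative model maps latent factors
# (digit identity plus assumed continuous style factors) to an image.
import torch
import torch.nn as nn

NUM_DIGITS = 10          # discrete latent: which digit (0 through 9)
STYLE_DIM = 4            # assumed continuous latents: angle, size, thickness, noise
IMG_PIXELS = 28 * 28     # MNIST image size

class DiscriminativeClassifier(nn.Module):
    """Models p(digit | image): maps an image to scores over the ten digits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_PIXELS, 128), nn.ReLU(),
            nn.Linear(128, NUM_DIGITS),
        )

    def forward(self, image):
        return self.net(image.flatten(1))            # logits over the 10 digits

class LatentVariableGenerator(nn.Module):
    """Models p(image | digit, style): renders latent factors into an image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_DIGITS + STYLE_DIM, 128), nn.ReLU(),
            nn.Linear(128, IMG_PIXELS), nn.Sigmoid(),  # pixel intensities in [0, 1]
        )

    def forward(self, digit_onehot, style):
        z = torch.cat([digit_onehot, style], dim=1)   # combine discrete and continuous latents
        return self.net(z).view(-1, 28, 28)

# Sampling from the generative "story": pick a digit, draw style factors, render.
digit = torch.nn.functional.one_hot(
    torch.randint(0, NUM_DIGITS, (1,)), NUM_DIGITS
).float()
style = torch.randn(1, STYLE_DIM)                     # orientation, size, thickness, noise
fake_image = LatentVariableGenerator()(digit, style)  # shape: (1, 28, 28)

# The discriminative model answers the reverse question: given an image, which digit?
logits = DiscriminativeClassifier()(fake_image)       # shape: (1, 10)

Both networks here are untrained; the sketch only shows the direction each model works in. A trained generator of this kind would be fit so that samples drawn this way resemble real MNIST images, which is the harder task described above.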