Chapter 5. Paint
So far, we have explored various ways to train a model that generates new samples, given only a training set of data we wish to imitate. We've applied this to several datasets and seen how, in each case, VAEs and GANs are able to learn a mapping between an underlying latent space and the original pixel space. By sampling a vector from a distribution over the latent space, we can use the generative model to map it to a novel image in pixel space.
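For concreteness, here is a minimal sketch of that sampling step. It assumes a trained Keras decoder (or GAN generator) passed in as `decoder`, and a latent dimension of 100; both the name and the size are illustrative assumptions, not code from this book.

```python
import numpy as np

LATENT_DIM = 100  # assumed size of the latent space

def sample_novel_images(decoder, n_samples=1):
    """Map random latent vectors to novel images in pixel space."""
    # Draw latent vectors from a standard normal prior over the latent space.
    z = np.random.normal(size=(n_samples, LATENT_DIM))
    # The trained decoder/generator maps each latent vector to an image.
    return decoder.predict(z)
```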
Notice that all of the examples we have seen so far produce novel observations from scratch: the only input used to generate the images is a random vector sampled from the latent space.
A different application of generative models is in the field of style transfer. Here, our aim is to build a model that can transform an input base image in order to give the impression that it comes from the same collection as a given set of style images. This technique has clear commercial applications and is now being used in computer graphics software, computer game design, and mobile phone applications. Some examples of this are shown in Figure 5-1.
With style transfer, our aim isn’t to model the underlying distribution of the style images, but instead to extract only the stylistic components from these images and embed these into the base image. We clearly cannot just merge the style images with the base image through interpolation, as the content of the style images would show through ...
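To see why naive merging fails, consider pixel-wise linear interpolation between the two images. This sketch is our own illustration of the failure mode, not a technique from the chapter:

```python
import numpy as np

def naive_blend(base_image, style_image, alpha=0.5):
    """Pixel-wise linear interpolation between two images of the same shape."""
    # This mixes the *content* of both images, not just the style:
    # the subject of the style image bleeds through into the result.
    return alpha * base_image + (1.0 - alpha) * style_image
```

A successful style transfer model must instead separate stylistic features from content, applying only the former to the base image.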