Chapter 13. Generating Images with Autoencoders
In Chapter 5 we explored how we can generate text in the style of an existing corpus, whether the works of Shakespeare or code from the Python standard library, while in Chapter 12 we looked at generating images by optimizing the activation of channels in a pretrained network. In this chapter we combine those techniques and build on them to generate images based on examples.
Generating images based on examples is an area of active research where new ideas and breakthroughs are reported on a monthly basis. The state-of-the-art algorithms, however, are beyond the scope of this book in terms of model complexity, training time, and data needed. Instead, we’ll be working in a somewhat restricted domain: hand-drawn sketches.
We’ll start by looking at Google’s Quick Draw dataset. This is the result of an online drawing game and contains many hand-drawn pictures. The drawings are stored in a vector format, so we’ll convert them to bitmaps. We’ll pick sketches with one label: cats.
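The chapter walks through this conversion in detail; as a preview, here is a minimal sketch of turning the vector drawings into bitmaps. The URL and the simplified-NDJSON stroke format (one JSON drawing per line, each stroke a pair of x and y coordinate lists on a 256×256 canvas) follow the public Quick Draw data release; the rasterization choices such as line width and output size are illustrative assumptions, not the book’s exact code:

import json
from itertools import islice
from urllib.request import urlretrieve
from PIL import Image, ImageDraw

# Simplified Quick Draw drawings are published per label as NDJSON.
URL = 'https://storage.googleapis.com/quickdraw_dataset/full/simplified/cat.ndjson'

def strokes_to_bitmap(strokes, size=32):
    # Draw the strokes on a 256x256 canvas, then downsample to size x size.
    img = Image.new('L', (256, 256), 255)
    draw = ImageDraw.Draw(img)
    for xs, ys in strokes:
        draw.line(list(zip(xs, ys)), fill=0, width=5)
    return img.resize((size, size))

path, _ = urlretrieve(URL, 'cat.ndjson')
with open(path) as f:
    cats = [strokes_to_bitmap(json.loads(line)['drawing'])
            for line in islice(f, 10000)]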
Based on these cat sketches, we’ll build an autoencoder model that is capable of learning catness—it can convert a cat drawing into an internal representation and then generate something similar-looking from that internal representation. We’ll first visualize how well this network performs on our cats.
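The exact architecture is developed step by step in the chapter; as a rough sketch under assumed layer sizes (not the book’s final choices), a convolutional autoencoder in Keras pairs an encoder that compresses each bitmap into a small code with a decoder that reconstructs the image from that code:

from tensorflow.keras import layers, models

def build_autoencoder(size=32, code_dim=64):
    inp = layers.Input(shape=(size, size, 1))
    # Encoder: compress the sketch into a small dense code.
    x = layers.Conv2D(32, 3, strides=2, padding='same', activation='relu')(inp)
    x = layers.Conv2D(64, 3, strides=2, padding='same', activation='relu')(x)
    x = layers.Flatten()(x)
    code = layers.Dense(code_dim, activation='relu')(x)
    # Decoder: reconstruct the bitmap from the code.
    x = layers.Dense((size // 4) ** 2 * 64, activation='relu')(code)
    x = layers.Reshape((size // 4, size // 4, 64))(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding='same', activation='relu')(x)
    out = layers.Conv2DTranspose(1, 3, strides=2, padding='same', activation='sigmoid')(x)
    model = models.Model(inp, out)
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return model

autoencoder = build_autoencoder()
# Training reproduces the input, so the bitmaps serve as both x and y:
# autoencoder.fit(x_train, x_train, epochs=50, batch_size=128)

Because the network must reproduce its input through a narrow bottleneck, the code it learns has to capture what the cat drawings have in common—which is exactly the catness we can later generate from.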
We’ll then switch to a dataset of hand-drawn digits and move on to variational autoencoders. These networks produce dense latent spaces that are an abstract representation of their inputs; sampling points from such a space gives us new images.
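The mechanical difference from the plain autoencoder is small but important: the encoder predicts a distribution over codes rather than a single point, sampling from that distribution uses the reparameterization trick so gradients can flow, and the loss adds a KL divergence term that keeps the latent distribution close to a standard normal. A minimal sketch of the sampling step follows; the names and the surrounding wiring are illustrative, not the book’s code:

import tensorflow as tf
from tensorflow.keras import layers

class Sampling(layers.Layer):
    # Draw z from N(z_mean, exp(z_log_var)) via the reparameterization trick.
    def call(self, inputs):
        z_mean, z_log_var = inputs
        epsilon = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * epsilon

# In the encoder, two Dense heads parameterize the latent distribution:
#   z_mean = layers.Dense(latent_dim)(x)
#   z_log_var = layers.Dense(latent_dim)(x)
#   z = Sampling()([z_mean, z_log_var])
# and training adds a KL term to the reconstruction loss:
#   kl = -0.5 * tf.reduce_mean(
#       1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var))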