In the previous chapter we looked at generating hand-drawn sketches from the Quick Draw project and digits from the MNIST dataset. In this chapter we’ll try three types of networks on a slightly more challenging task: generating icons.
Before we can do any generating we need to get our hands on a set of icons. Searching online for “free icons” results in a lot of hits, but few of these are “free as in speech,” and most of them struggle even when it comes to “free as in beer”: you can’t freely reuse the icons, and the sites usually strongly suggest you pay for them after all. So, we’ll start with how to download, extract, and process icons into a standard format that we can use in the rest of the chapter.
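To give a flavor of what “a standard format” means here, the sketch below normalizes an icon to a fixed-size grayscale array with values in [0, 1]. It is illustrative only and assumes the icon is already decoded into a NumPy array; a real pipeline would use a library such as Pillow for decoding and resampling.

```python
import numpy as np

def standardize_icon(img, size=32):
    """Convert a grayscale icon (2-D uint8 array) into a fixed-size
    float array in [0, 1] using nearest-neighbor resampling.
    Illustrative sketch; real pipelines would use PIL or OpenCV."""
    h, w = img.shape
    # For each target pixel, pick the nearest source pixel.
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = img[rows][:, cols]
    return resized.astype(np.float32) / 255.0

# A fake 64x64 "icon": white background with a black square.
icon = np.full((64, 64), 255, dtype=np.uint8)
icon[16:48, 16:48] = 0
out = standardize_icon(icon)
print(out.shape)   # (32, 32)
```

Whatever the exact target resolution, the point is that every downloaded icon ends up as the same-shaped array, which is what the networks in this chapter expect.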
The first thing we’ll try is to train a conditional variational autoencoder on our set of icons. We’ll use the network we ended up with in the previous chapter as a basis, but we’ll add some convolutional layers to it to make it perform better, since the icon space is so much more complex than that of hand-drawn digits.
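To make concrete what a convolutional layer computes, here is a naive single-channel 2-D convolution (strictly speaking a cross-correlation, as in most deep learning frameworks). It is a teaching sketch, not the framework implementation: real layers add channels, strides, padding, and learned kernels.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2-D convolution of one channel: slide the kernel
    over the image and take the elementwise product-sum at each spot."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

# A hand-picked vertical-edge kernel applied to a half-dark image.
img = np.zeros((5, 5))
img[:, 3:] = 1.0
edge = conv2d(img, np.array([[-1.0, 1.0]]))
print(edge[0])   # [0. 0. 1. 0.] -- fires exactly at the edge
```

A trained network learns many such kernels, and stacking them lets the encoder pick up local structure like edges and corners, which matters much more for icons than for simple digit strokes.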
The second type of network we’ll try is a generative adversarial network. Here we’ll train two networks, one to generate icons and another to distinguish between generated icons and real icons. The competition between the two pushes the generator to produce increasingly convincing icons.
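The adversarial setup can be summarized with the losses the two players optimize. The sketch below fakes the discriminator’s outputs as hand-picked probabilities (no actual networks involved) just to show the objective: the discriminator is rewarded for scoring real icons near 1 and fakes near 0, while the generator is rewarded when its fakes score near 1.

```python
import numpy as np

def bce(preds, labels):
    """Binary cross-entropy, the loss both GAN players optimize."""
    eps = 1e-7
    preds = np.clip(preds, eps, 1 - eps)
    return -np.mean(labels * np.log(preds)
                    + (1 - labels) * np.log(1 - preds))

# Hypothetical discriminator outputs: probability each icon is real.
p_real = np.array([0.9, 0.8])   # scores on real icons
p_fake = np.array([0.2, 0.1])   # scores on generated icons

# The discriminator wants real -> 1 and fake -> 0 ...
d_loss = bce(p_real, np.ones(2)) + bce(p_fake, np.zeros(2))
# ... while the generator wants its fakes scored as real.
g_loss = bce(p_fake, np.ones(2))
print(d_loss, g_loss)
```

With these numbers the discriminator is winning (low `d_loss`, high `g_loss`); training alternates updates to the two networks so that neither stays ahead for long.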
The third and final type of network we’ll try is an RNN. In Chapter 5 we used this to generate texts in a certain style. By reinterpreting icons as a sequence of drawing instructions, we can apply the same technique here and have the network generate new icons instruction by instruction.
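One hypothetical way to turn pixels into such a sequence is a scan-line encoding: walk the icon row by row and emit one “draw” instruction per horizontal run of on-pixels. The function below is an illustrative sketch of that idea, not the encoding the chapter will necessarily settle on.

```python
def icon_to_strokes(grid):
    """Reinterpret a binary icon as drawing instructions: one tuple
    (row, start_col, end_col) per horizontal run of on-pixels.
    A scan-line encoding like this gives an RNN a sequence to learn."""
    strokes = []
    for r, row in enumerate(grid):
        c = 0
        while c < len(row):
            if row[c]:
                start = c
                while c < len(row) and row[c]:
                    c += 1
                strokes.append((r, start, c - 1))
            else:
                c += 1
    return strokes

icon = [
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 1, 0],
]
print(icon_to_strokes(icon))  # [(0, 1, 2), (1, 0, 3), (2, 1, 2)]
```

Once icons are sequences of discrete instructions like these, generating an icon is no different in principle from generating text one token at a time.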