Training instability describes erratic, non-converging weight updates during GAN optimization, where the generator and discriminator fail to reach equilibrium. Several factors are believed to contribute to this instability, including:
- Sparse gradients
- Disjoint support between fake images and real images
Non-linearities such as ReLU and max pooling produce sparse gradients, which can make training unstable. Later in this chapter we will propose ways to avoid sparse gradients in GANs. For now, let's investigate disjoint support.
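To make the sparse-gradient problem concrete, here is a small NumPy sketch (not from the original text) comparing the gradients that flow through ReLU versus LeakyReLU. The 0.2 negative slope is an illustrative choice; the exact value is a hyperparameter.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)  # simulated pre-activation values, roughly zero-mean

# ReLU passes gradient only where the input is positive,
# so about half the gradient entries are exactly zero.
relu_grad = (x > 0).astype(float)

# LeakyReLU keeps a small slope (here 0.2) for negative inputs,
# so no gradient entry is exactly zero.
leaky_grad = np.where(x > 0, 1.0, 0.2)

print(f"ReLU zero-gradient fraction:      {np.mean(relu_grad == 0):.2f}")
print(f"LeakyReLU zero-gradient fraction: {np.mean(leaky_grad == 0):.2f}")
```

With zero-mean inputs, roughly half of the ReLU gradients vanish, while LeakyReLU always passes some gradient back to earlier layers.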
In the original GAN setup, we optimize the discriminator by learning a decision boundary that separates real data from fake data. If the support of the fake images does not overlap with the support of the real images, the discriminator can differentiate the two perfectly. Its loss then saturates, and the generator receives vanishing gradients, so it stops improving.
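The disjoint-support failure can be simulated in a few lines of NumPy (an illustrative sketch, not from the original text): two well-separated 1-D "real" and "fake" distributions, a threshold discriminator that classifies them perfectly, and the resulting near-zero gradient on the generator's samples.

```python
import numpy as np

rng = np.random.default_rng(1)
real = rng.normal(loc=4.0, scale=0.5, size=1000)   # "real" samples, support ~ [2.5, 5.5]
fake = rng.normal(loc=-4.0, scale=0.5, size=1000)  # "fake" samples, support ~ [-5.5, -2.5]

# With disjoint supports, a trivial threshold at zero separates them perfectly.
accuracy = (np.mean(real > 0) + np.mean(fake <= 0)) / 2
print(f"discriminator accuracy: {accuracy:.2f}")  # 1.00

# For a confident sigmoid discriminator D(x) = sigmoid(w * x), the gradient of
# the generator's loss log(1 - D(x)) w.r.t. a fake sample x is -w * D(x),
# which collapses toward zero as the discriminator becomes certain.
w = 10.0
d_fake = 1 / (1 + np.exp(-w * fake))  # D(x) is ~0 on every fake sample
grad = -w * d_fake
print(f"mean |gradient| on fakes: {np.mean(np.abs(grad)):.2e}")
```

The perfect discriminator sends almost no learning signal back to the generator, which is exactly why disjoint support destabilizes training.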