Training instability

Training instability refers to erratic, oscillating, or divergent weight updates during GAN optimization. A few factors are believed to contribute to this instability, including:

  • Sparse gradients
  • Disjoint support between fake images and real images

Non-linearities such as ReLU and max pooling produce sparse gradients that can make training unstable; the short sketch below illustrates the effect. Later in this chapter, we propose solutions for avoiding sparse gradients in GANs.
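To make the sparse-gradient point concrete, here is a minimal sketch (not from the book's code) that uses TensorFlow's eager gradients to compare the gradient of ReLU with that of LeakyReLU, a common dense-gradient replacement; the choice of alpha=0.2 is only illustrative:

```python
import tensorflow as tf

x = tf.constant([-2.0, -0.5, 0.5, 2.0])

# Gradient of ReLU: exactly zero for every negative pre-activation
with tf.GradientTape() as tape:
    tape.watch(x)
    y_relu = tf.nn.relu(x)
print("ReLU gradients:     ", tape.gradient(y_relu, x).numpy())   # [0.  0.  1.  1.]

# Gradient of LeakyReLU: small but non-zero slope everywhere
with tf.GradientTape() as tape:
    tape.watch(x)
    y_leaky = tf.nn.leaky_relu(x, alpha=0.2)
print("LeakyReLU gradients:", tape.gradient(y_leaky, x).numpy())  # [0.2 0.2 1.  1. ]
```

Half of the ReLU gradients are exactly zero, so layers below receive no learning signal from those units; the LeakyReLU variant keeps a small gradient flowing for negative inputs.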

For now, let's investigate disjoint support. In the original GAN setup, we optimize the discriminator by learning a decision boundary that separates real data from fake data. If the support of the fake images does not overlap with the support of the real images, the discriminator can perfectly separate the two; once it does, its output saturates and it passes almost no useful gradient back to the generator, so generator training stalls.
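The following is a minimal sketch (not the book's code) of this failure mode on 1-D data: real samples live on [0, 1] and fake samples on the disjoint interval [2, 3]. A tiny Keras discriminator separates them almost perfectly, and the gradient of the original (saturating) generator loss, log(1 - D(x)), with respect to the fake samples collapses towards zero. The architecture and hyperparameters are arbitrary choices for illustration:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
real = rng.uniform(0.0, 1.0, size=(1024, 1)).astype("float32")  # real support: [0, 1]
fake = rng.uniform(2.0, 3.0, size=(1024, 1)).astype("float32")  # fake support: [2, 3]

# A tiny discriminator that outputs the probability of a sample being real
disc = keras.Sequential([
    keras.Input(shape=(1,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
disc.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

x = np.concatenate([real, fake])
y = np.concatenate([np.ones((1024, 1), "float32"), np.zeros((1024, 1), "float32")])
disc.fit(x, y, epochs=50, batch_size=64, verbose=0)

# With disjoint supports, the discriminator separates the data (near-)perfectly
print("discriminator accuracy:", disc.evaluate(x, y, verbose=0)[1])

# Gradient of the saturating generator loss, log(1 - D(x)), w.r.t. the fake
# samples: it shrinks towards zero because the discriminator is already
# confident that the fakes are fake (its sigmoid output is saturated).
fake_t = tf.constant(fake)
with tf.GradientTape() as tape:
    tape.watch(fake_t)
    g_loss = tf.reduce_mean(tf.math.log(1.0 - disc(fake_t) + 1e-8))
grad = tape.gradient(g_loss, fake_t)
print("mean |d loss / d fake|:", float(tf.reduce_mean(tf.abs(grad))))
```

The reported accuracy approaches 1.0 while the mean gradient magnitude is tiny, which is exactly the vanishing-gradient problem caused by disjoint support.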
