Training the generator

We'll train the generator by making it better at deceiving the discriminator. To do this, we'll need both networks, similar to the way we trained the discriminator with fake samples:

  1. We start with a random latent vector, z, and feed it through both the generator and discriminator to produce the output, D(G(z)).
  2. The loss function is the same as the discriminator loss. However, our goal here is to maximize it rather than minimize it, since we want to deceive the discriminator.
  3. In the backward pass, the discriminator weights are frozen, and only the generator weights are updated. This forces the generator to improve the samples themselves, rather than exploiting changes in the discriminator.
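
The steps above can be sketched as a single generator training step in PyTorch. The tiny MLP generator and discriminator below are hypothetical stand-ins, not the book's architectures; the key point is that only the generator's parameters are passed to the optimizer, so the discriminator stays fixed during this step:

```python
import torch
import torch.nn as nn

latent_dim = 16  # size of the latent vector z (illustrative choice)

# Hypothetical minimal networks; real GANs use deeper architectures
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 4))
discriminator = nn.Sequential(
    nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

# Only the generator's weights are given to this optimizer
g_optimizer = torch.optim.Adam(generator.parameters(), lr=1e-3)
bce = nn.BCELoss()

def generator_train_step(batch_size=8):
    # 1. Feed a random latent vector z through both networks: D(G(z))
    z = torch.randn(batch_size, latent_dim)
    output = discriminator(generator(z))
    # 2. Same loss as the discriminator, but with "real" labels (1):
    #    minimizing BCE against 1 maximizes log D(G(z)), i.e. it
    #    pushes the generator to deceive the discriminator.
    loss = bce(output, torch.ones(batch_size, 1))
    # 3. Backward pass: gradients flow through the discriminator,
    #    but g_optimizer.step() updates only the generator weights.
    g_optimizer.zero_grad()
    loss.backward()
    g_optimizer.step()
    return loss.item()

loss = generator_train_step()
```

Labeling the fake samples as real and minimizing the usual loss is the common practical substitute for explicitly maximizing the discriminator loss, as it gives stronger gradients early in training.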
