As mentioned earlier, the discriminator and generator each have their own loss function, and each loss depends on the output of the other's network. We can think of the GAN as playing a minimax game between the discriminator and the generator that looks like the following:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]

Here, D is our discriminator, G is our generator, z is a random vector input to the generator, and x is a real image. Although we have given the combined GAN loss here, it is actually easier to consider the two optimizations separately.
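Splitting the minimax objective and holding the other network fixed, the two separate optimizations follow directly from the formula above:

Discriminator step (G fixed): \max_D \; \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]

Generator step (D fixed): \min_G \; \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]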
In order to train the GAN, we will alternate gradient update steps between the discriminator and the generator. When updating the ...
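As a rough sketch of this alternating scheme, one training iteration might look like the following. This is a minimal example written in PyTorch with hypothetical two-layer networks purely for illustration; the actual architectures, framework, and hyperparameters used in this book may differ.

import torch
import torch.nn as nn

# Hypothetical toy generator and discriminator (the real architectures are defined elsewhere)
latent_dim = 100
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, 784), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1))  # outputs a logit

opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()  # sigmoid + binary cross-entropy in one op

def train_step(real_images):
    batch_size = real_images.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # Discriminator update: push D(x) toward 1 and D(G(z)) toward 0,
    # i.e. maximize log D(x) + log(1 - D(G(z)))
    z = torch.randn(batch_size, latent_dim)
    fake_images = generator(z).detach()  # detach so G is not updated in this step
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fake_images), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: push D(G(z)) toward 1
    # (the common non-saturating form of minimizing log(1 - D(G(z))))
    z = torch.randn(batch_size, latent_dim)
    g_loss = bce(discriminator(generator(z)), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

    return d_loss.item(), g_loss.item()

# Example usage with a random batch standing in for flattened 28x28 images
d_loss, g_loss = train_step(torch.randn(64, 784))

The key point is the alternation: each call performs one gradient step on the discriminator with the generator frozen, then one gradient step on the generator with the discriminator frozen.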