GANs are conceptually simple, but notoriously difficult to train due to a number of unstable dynamics in the training process. The generator's job is to produce samples from a high-dimensional data distribution; since it has no direct access to that distribution, it instead draws its inputs from a standard normal (Gaussian) latent distribution and learns to map them to realistic samples.
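As a rough illustration of that sampling step, here is a toy NumPy sketch (the linear `generator` and its weights `W` are hypothetical stand-ins; a real generator is a trained deep network):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(batch_size, latent_dim):
    # Draw latent vectors z from a standard normal N(0, I);
    # the generator learns to map these to samples resembling the data.
    return rng.standard_normal((batch_size, latent_dim))

def generator(z, W):
    # Toy linear "generator" mapping latent space to data space.
    return z @ W

z = sample_latent(64, 8)         # 64 latent vectors of dimension 8
W = rng.standard_normal((8, 2))  # untrained toy weights: latent dim 8 -> data dim 2
fake = generator(z, W)
print(fake.shape)
```

The only part that matters here is the input: the generator never sees the training data directly, only noise drawn from the latent Gaussian.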
Both the generator and the discriminator are trained jointly in a minimax game, formalized by the following objective (often called the minimax function):

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
Let's break this down a bit. The objective tells us what each player is optimizing. Start with the first term of the expression:
The notation \mathbb{E} denotes expectation: \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] is the expected log-score the discriminator assigns to real samples x drawn from the data distribution, ...
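In practice these expectations are estimated with Monte Carlo averages over minibatches. A minimal sketch of estimating the minimax value V(D, G), assuming a hypothetical fixed sigmoid discriminator and an identity "generator" purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def discriminator(x):
    # Toy discriminator: sums features and squashes the score into (0, 1)
    # with a sigmoid. A real D would be a trained network.
    return 1.0 / (1.0 + np.exp(-x.sum(axis=1)))

real = rng.normal(loc=2.0, size=(1000, 2))  # samples standing in for p_data
z = rng.standard_normal((1000, 2))          # latent samples z ~ N(0, I)
fake = z                                    # identity "generator" for illustration

# Monte Carlo estimate of
# V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))]
eps = 1e-12  # avoid log(0)
v = (np.mean(np.log(discriminator(real) + eps))
     + np.mean(np.log(1.0 - discriminator(fake) + eps)))
print(v)
```

The discriminator ascends this value (pushing D(x) toward 1 on real data and toward 0 on fakes), while the generator descends it.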