Building the training process
In the train_network function, we first define the optimizers for the generator and the discriminator loss functions. We use the Adam optimizer for both the generators and the discriminators, since it is an extension of stochastic gradient descent that works well for training GANs. Adam maintains a decaying average of past gradients, much like momentum, to smooth the gradient direction, and a decaying average of squared gradients, which provides information about the curvature of the cost function. The variables pertaining to the different losses defined through tf.summary are written to the log files and can therefore be monitored through TensorBoard. The following is the detailed code for the train function: ...
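Since the book's actual train function is not reproduced above, the following is a minimal sketch of how such a setup could look in TensorFlow 1.x. The tensor names g_loss and d_loss, the variable-scope names generator and discriminator, and the hyperparameter values are illustrative assumptions, not the author's code.

import tensorflow as tf

def train_network(g_loss, d_loss, learning_rate=2e-4, beta1=0.5):
    # Assumed variable scopes; collect the trainable variables of each sub-network.
    g_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='generator')
    d_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='discriminator')

    # Separate Adam optimizers for the generator and the discriminator losses.
    g_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(
        g_loss, var_list=g_vars)
    d_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(
        d_loss, var_list=d_vars)

    # Scalar summaries so the losses can be monitored in TensorBoard.
    tf.summary.scalar('generator_loss', g_loss)
    tf.summary.scalar('discriminator_loss', d_loss)
    merged_summary = tf.summary.merge_all()

    return g_opt, d_opt, merged_summary

During the session loop, the merged summaries would typically be written to a log directory with tf.summary.FileWriter, and TensorBoard pointed at that directory to monitor the loss curves.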