The experiment focuses on showing numerical properties of fake MNIST samples, and of features computed from them, that are invisible to the naked eye but can be used to identify the samples as produced by a GAN. We start by comparing the distribution of features computed over the MNIST training set to that of other datasets, including the MNIST test set, samples generated with the Least-Squares GAN (LSGAN) and the Improved Wasserstein GAN (IWGAN), and adversarial samples computed using the Fast Gradient Sign Method (FGSM). The training data is scaled to [0, 1], and the random baseline is sampled from a Bernoulli distribution with probability equal to the mean pixel intensity of the MNIST training data, 0.13. Each GAN model is trained until the loss plateaus and ...
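The scaling and the Bernoulli baseline described above can be sketched as follows. This is a minimal illustration using NumPy only; the `train_images` array here is a random stand-in (in practice you would load the real images, e.g. via `keras.datasets.mnist.load_data()`):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the MNIST training images, with values in [0, 255].
# Replace with the real data in practice.
train_images = rng.integers(0, 256, size=(1000, 28, 28)).astype(np.float64)

# Scale pixel intensities to [0, 1], as described in the text.
train_scaled = train_images / 255.0

# Random baseline: each pixel is drawn from a Bernoulli distribution
# whose probability equals the mean pixel intensity of the scaled
# training data (about 0.13 for the real MNIST training set).
p = train_scaled.mean()
baseline = rng.binomial(1, p, size=train_scaled.shape).astype(np.float64)
```

The resulting `baseline` images contain only 0s and 1s, but share the overall "ink density" of the training set, which makes them a sensible null reference when comparing feature distributions.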
