Generative Adversarial Networks
October 15, 2018
Overview
Assignment 5 - Generative Adversarial Networks (Instructions)
Section 1.2: Changing the Model
GAN Playground/Model Builder: here
- The Generator needs the second FC layer to transform the shape [256] output of the first FC layer, because [256] cannot be reshaped to [28,28,1]. Since the Generator needs a final output of [28,28,1], it maps its input to [256] via the 1st FC layer, then to [784] via the 2nd FC layer (784 = 28 × 28 × 1), and finally to [28,28,1] via the Reshape layer. By the same counting argument, it is not possible to reshape [600] to [20,20,2] (which would require 20 × 20 × 2 = 800 elements), but it IS possible to reshape [800] to [20,10,4] (20 × 10 × 4 = 800). The arithmetic is spelled out in the sketch below.
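A reshape is legal exactly when the element counts match, which is easy to verify with numpy (illustrative only, not code from the playground):

```python
import numpy as np

# A reshape succeeds only when the total element counts match.
x = np.zeros(784)
img = x.reshape(28, 28, 1)            # OK: 28 * 28 * 1 = 784

y = np.zeros(800).reshape(20, 10, 4)  # OK: 20 * 10 * 4 = 800

# These would both raise ValueError, since the counts differ:
# np.zeros(256).reshape(28, 28, 1)    # 256 != 784
# np.zeros(600).reshape(20, 20, 2)    # 600 != 800
```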
- I experimented with the default fully-connected and convolutional models by training both on ~250,000 training examples. At ~250,000 examples, the default fully-connected model was unable to train the generator to produce images that looked even vaguely like digits; the generated images stayed pixelated and somewhat random. In contrast, the default convolutional model generated images that started to take the shape of digits after ~250,000 training examples.
Default Fully-Connected Model
Default Convolutional Model
- The default fully-connected model performed worse than the default convolutional model, as shown in the observations and screenshots above.
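For concreteness, here is a minimal sketch of the two generator styles, written in PyTorch since the playground's deeplearn.js model definitions aren't reproduced here; the layer sizes are my assumptions, not the playground's exact defaults:

```python
import torch.nn as nn

# Fully-connected generator: noise -> [256] -> [784] -> reshape to an image.
fc_generator = nn.Sequential(
    nn.Linear(100, 256),               # 1st FC layer
    nn.ReLU(),
    nn.Linear(256, 784),               # 2nd FC layer: 784 = 28 * 28 * 1
    nn.Tanh(),
    nn.Unflatten(1, (1, 28, 28)),      # reshape layer (channels-first in PyTorch)
)

# Convolutional generator: project the noise, then upsample with transposed convs.
conv_generator = nn.Sequential(
    nn.Linear(100, 7 * 7 * 64),
    nn.ReLU(),
    nn.Unflatten(1, (64, 7, 7)),
    nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # 7x7 -> 14x14
    nn.ReLU(),
    nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),   # 14x14 -> 28x28
    nn.Tanh(),
)
```

The convolutional generator's transposed convolutions exploit the 2D structure of the image, which is consistent with it producing digit-like shapes sooner than the fully-connected model.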
Section 1.3: Exploring with the GAN Playground
- I experimented with varying the discriminator and generator models in the following ways: making the generator learning rate greater than the discriminator learning rate, making the discriminator learning rate greater than the generator learning rate, and changing the optimizer from simple gradient descent to Adam.
- Generator learning rate (0.2) greater than discriminator learning rate (0.01):
Generator learning rate: 0.2
Discriminator learning rate: 0.01
- Discriminator learning rate (0.2) greater than generator learning rate (0.01):
The discriminator strongly overpowered the generator. After only ~17,500 training images, the discriminator classified 100% of the generator's images as fake, and the generator never got the chance to create potentially viable images; the generated images stayed random. The discriminator's higher learning rate prevented the generator from learning.
Generator learning rate: 0.01
Discriminator learning rate: 0.2
- Using the Adam optimizer for both discriminator and generator:
Changing the discriminator's optimizer from simple gradient descent to Adam enabled the discriminator to perform better. The average discriminator cost is larger when using Adam than when using simple gradient descent, meaning the discriminator tends to lean towards classifying fake images as real.
Discriminator Adam Optimizer
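All three experiments amount to changing which optimizer and learning rate each network receives. A skeletal PyTorch training step makes that wiring explicit (the tiny networks and the exact rates are placeholders, not the playground's implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal fully-connected networks, just to make the optimizer wiring concrete.
generator = nn.Sequential(nn.Linear(100, 784), nn.Tanh())
discriminator = nn.Sequential(nn.Flatten(), nn.Linear(784, 1))

# Separate optimizers let the two networks learn at different speeds.
g_opt = torch.optim.SGD(generator.parameters(), lr=0.2)       # generator learning rate
d_opt = torch.optim.SGD(discriminator.parameters(), lr=0.01)  # discriminator learning rate

def train_step(real_images, noise):
    # Discriminator step: push real images toward label 1, generated images toward 0.
    fake = generator(noise).detach()
    d_loss = (F.binary_cross_entropy_with_logits(
                  discriminator(real_images), torch.ones(real_images.size(0), 1))
              + F.binary_cross_entropy_with_logits(
                  discriminator(fake), torch.zeros(fake.size(0), 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: make the discriminator call the fakes real.
    fake = generator(noise)
    g_loss = F.binary_cross_entropy_with_logits(
        discriminator(fake), torch.ones(fake.size(0), 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

With lr=0.2 for the generator and lr=0.01 for the discriminator, this matches the first experiment; swapping the two rates reproduces the second, and replacing torch.optim.SGD with torch.optim.Adam reproduces the third.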
- For convolutional models where the generator learning rate was greater than or equal to the discriminator learning rate, the generator was able to produce images that looked vaguely like MNIST digits! Most of these models were trained on 270,000-350,000 training examples. The default model generated most of the MNIST digits (0-9), whereas the models with a slightly increased generator learning rate, or with the Adam optimizer, tended to produce the digits 3, 8, and 9.
- When the discriminator learning rate was greater than the generator learning rate, the discriminator prevented the generator from learning: the generator kept producing random images, and the discriminator classified fake images correctly 100% of the time. When the generator learning rate was greater than the discriminator learning rate, the generator learned more quickly and started producing images that looked like MNIST digits with fewer training examples.
- I built a generator and discriminator model for CIFAR 10 using only fully connected layers, and trained the model on ~30,000 examples over ~15 minutes. With only 15 minutes of training and so few examples, the generator produced only random images and the discriminator was able to predict fake images correctly 100% of the time.
CIFAR 10 Fully-Connected Model
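Structurally, the only change from the MNIST fully-connected model is the output size: CIFAR 10 images are [32,32,3], so the last FC layer must emit 32 × 32 × 3 = 3072 values. A hedged sketch in the same PyTorch style (the hidden-layer widths are guesses, not the exact model I built in the playground):

```python
import torch.nn as nn

# CIFAR 10 images are 32 x 32 x 3, so the generator must emit 3072 values.
cifar_generator = nn.Sequential(
    nn.Linear(100, 512),
    nn.ReLU(),
    nn.Linear(512, 32 * 32 * 3),   # 3072 = 32 * 32 * 3
    nn.Tanh(),
    nn.Unflatten(1, (3, 32, 32)),  # reshape to an image (channels-first)
)

cifar_discriminator = nn.Sequential(
    nn.Flatten(),                  # [3, 32, 32] -> [3072]
    nn.Linear(32 * 32 * 3, 256),
    nn.ReLU(),
    nn.Linear(256, 1),             # real-vs-fake logit
)
```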