
October 29, 2018

Overview

Assignment 7 - Recurrent Neural Networks (Instructions)

Section 1: Understanding LSTMs

Understanding LSTM Networks - Blog Post

  1. At a high level, the two pieces of information passed between modules are the cell state and the hidden state, which is a filtered version of the cell state and also serves as the module's output.
  2. RNNs can learn to use recent information to perform a task when the gap between the relevant information and the place where it is needed is small. As that gap grows, however, plain RNNs become unable to learn to connect the information. LSTMs ("Long Short-Term Memory" networks) are designed to solve this problem: each module remembers relevant past information by explicitly forgetting or writing information to the cell state, filters the updated state, and passes the result on to the next iteration.
  3. Viewed as being "unrolled", an RNN's modules pass processed state information from one to the next. Although the unrolled modules share the same weights, the values passed between them differ at every step because they depend on the inputs seen so far.
  4. We use a sigmoid non-linearity for the gates instead of a ReLU because the sigmoid squashes its output into the range 0 to 1, so each gate value acts as a fraction of how much information to let through; a ReLU is unbounded above, so the values could grow without limit from one iteration to the next. (A minimal sketch of these gates follows this list.)
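
The points above can be summarized in a few lines of code. Below is a minimal NumPy sketch of a single LSTM step (the stacked-weight layout and variable names are illustrative, not the RecurrentJS demo's exact parameterization): the sigmoid gates stay between 0 and 1, the cell state `c` carries the long-term memory between modules, and the hidden state `h` is the filtered cell state that gets passed onward.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    # One LSTM step. W stacks the four gate weight matrices over [x; h_prev];
    # this layout is illustrative, not the RecurrentJS demo's exact code.
    H = h_prev.size
    z = W @ np.concatenate([x, h_prev]) + b
    f = sigmoid(z[0*H:1*H])   # forget gate: how much of the old cell state to keep
    i = sigmoid(z[1*H:2*H])   # input gate: how much new information to write
    o = sigmoid(z[2*H:3*H])   # output gate: how much of the cell state to expose
    g = np.tanh(z[3*H:4*H])   # candidate values to add to the cell state
    c = f * c_prev + i * g    # new cell state (the memory carried between modules)
    h = o * np.tanh(c)        # hidden state: the filtered cell state passed onward
    return h, c

# Tiny usage example with random weights.
H, X = 8, 4
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((4 * H, X + H))
b = np.zeros(4 * H)
h, c = lstm_step(rng.standard_normal(X), np.zeros(H), np.zeros(H), W, b)
```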

Section 2.1: Complete the Code

RecurrentJS - Documentation
Character Demo - GitHub Code

character_demo.html

Section 2.2: Run the Code

Deep Recurrent Nets Character Generation - Demo

  1. I let the model run for 15 epochs. The network generates sentences whose word lengths look plausible from a distance, but on closer inspection many words have jumbled spelling or are not words at all. This shows that the network builds sentences letter by letter rather than word by word.
  2. Setting the softmax temperature to 0.28 (lowering it) creates sentences with more real, common words, but the words that get used become very repetitive. Setting the temperature to 3.98 (raising it) makes the network insert random punctuation into words, producing fewer real words. (The temperature-sampling sketch after this list illustrates why.)

    Softmax 0.28

    Softmax 3.98

  3. The lengths of the generated sentences are a mix of the input sentence lengths from Dr. Seuss (short) and Adele (long). However, this does not mean that all generated sentences are medium length. Instead, some sentences are distinctly short and others distinctly long. The short sentences tend to use shorter words, like Dr. Seuss, and the longer sentences tend to use longer words, like Adele.

    Short sentences: 2 & 3; Long sentences: 1, 4, & 5

  4. Changing the model parameters and initialization:
    • Changing the generator from an LSTM to a plain RNN, the network is unable to generate sentences at all: the perplexity stays at infinity, because the RNN cannot learn from information that lies too many iterations back.

      Initial learning rate: 0.1

    • Changing the learning rate to 0.1 and training until epoch 15, the perplexity (~30-80) stays too high for the network to generate any meaningful words, which suggests the step size is too large. At a learning rate of 0.05, the perplexity drops to ~15 and the network starts generating phrases with plausible word lengths, but a learning rate of 0.05 still does not produce results as good as 0.01. (The perplexity sketch after this list relates these numbers to the model's per-character predictions.)

      Initial learning rate: 0.05
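
On question 2 above: softmax temperature only changes how the network's output distribution is sampled, not the network itself. A minimal NumPy sketch of temperature-scaled sampling (a conceptual stand-in for what the RecurrentJS demo does in JavaScript):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_char(logits, temperature=1.0):
    # Divide the logits by the temperature before the softmax:
    # low temperature (e.g. 0.28) sharpens the distribution toward the most
    # likely characters; high temperature (e.g. 3.98) flattens it toward uniform.
    scaled = np.asarray(logits) / temperature
    probs = np.exp(scaled - scaled.max())   # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)
```

At temperature 0.28 the distribution is sharply peaked, so the same common words keep coming out; at 3.98 it is close to uniform over all characters, which is why random punctuation starts appearing inside words.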
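
On question 4: perplexity is the number the demo reports during training. A small sketch, assuming the usual definition as the exponential of the mean negative log-probability of the correct next character (the demo may use base 2, so treat this as approximate):

```python
import numpy as np

def perplexity(correct_char_probs):
    # exp of the mean negative log-probability assigned to each correct
    # next character (an assumed definition for illustration).
    nll = -np.log(np.asarray(correct_char_probs))
    return float(np.exp(nll.mean()))

# A model as unsure as a uniform pick among 30 characters:
print(perplexity([1 / 30.0] * 100))   # ~30.0
```

A perplexity of ~30-80 means the model is roughly as uncertain as choosing uniformly among 30-80 characters at each step, which is why a learning rate of 0.05 (perplexity ~15) produces better-looking phrases than 0.1.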

Section 2.3: Run with new data

Kaggle song lyrics - Disney.txt

I substituted Disney lyrics for the Adele lyrics and trained the network on Dr. Seuss and Disney lyrics at softmax temperature 0.56 and learning rate 0.01. Training with Disney lyrics instead of Adele seems to be a little harder: even after 20 epochs, the words and sentences generated are still very garbled. However, the words the network seems to want to generate do feel more child-like, with many references to nature, animals, and boys/girls.

Network trained with Disney and Dr. Seuss lyrics after 20 epochs
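
For reference, this is roughly how the two lyric sources might be combined into the demo's training input, assuming the demo takes a newline-separated list of sentences pasted into the page (file names here are hypothetical):

```python
# Combine two lyric files into one newline-separated training text.
paths = ["dr_seuss_lyrics.txt", "disney_lyrics.txt"]

lines = []
for path in paths:
    with open(path, encoding="utf-8") as f:
        lines += [line.strip().lower() for line in f if line.strip()]

with open("input.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(lines))
```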

Section 3: Sketch RNN

Interesting Applications of RNNs:

  • Draw Together with a Neural Network (link) - Magenta
  • Humor Generation with Recurrent Neural Networks (link) - Train a network to generate jokes!
  • RNN Generated Playing Cards (link) - Hilarious monster damage summaries!