Learning to Execute
Venue
arXiv (2015)
Publication Year
2015
Authors
Wojciech Zaremba, Ilya Sutskever
Abstract
Recurrent Neural Networks (RNNs) with Long Short-Term Memory units (LSTM) are
widely used because they are expressive and are easy to train. Our interest lies in
empirically evaluating the expressiveness and the learnability of LSTMs in the
sequence-to-sequence regime by training them to evaluate short computer programs, a
domain that has traditionally been seen as too complex for neural networks. We
consider a simple class of programs that can be evaluated with a single
left-to-right pass using constant memory. Our main result is that LSTMs can learn
to map the character-level representations of such programs to their correct
outputs. Notably, it was necessary to use curriculum learning, and while
conventional curriculum learning proved ineffective, we developed a new variant of
curriculum learning that improved our networks' performance in all experimental
conditions. The improved curriculum had a dramatic impact on an addition problem,
making it possible to train an LSTM to add two 9-digit numbers with 99% accuracy.
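
For concreteness, below is a minimal sketch (not the paper's actual generator) of how such character-level training pairs could be produced: a short program from the restricted class, evaluable in one left-to-right pass with constant memory, is sampled, executed, and its printed result becomes the target string. The operand-length and loop-bound choices are illustrative assumptions, not the paper's hyperparameters.

```python
import contextlib
import io
import random


def sample_program(num_digits=4):
    """Sample a tiny program in the spirit of the paper's setup:
    an assignment, a small for-loop doing repeated addition, and a
    final print. Operands have at most `num_digits` digits
    (illustrative choice, not the paper's parameterization)."""
    a = random.randint(1, 10 ** num_digits - 1)
    b = random.randint(1, 10 ** num_digits - 1)
    c = random.randint(1, 10 ** num_digits - 1)
    loops = random.randint(1, 9)
    return (
        f"j={a}\n"
        f"for x in range({loops}):j+={b}\n"
        f"print((j+{c}))\n"
    )


def evaluate(program):
    """Run the program and capture what it prints; that string is the
    character-level target the LSTM is trained to emit."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(program, {})
    return buf.getvalue().strip() + "."  # targets in the paper's examples end with '.'


if __name__ == "__main__":
    src = sample_program()
    tgt = evaluate(src)
    print(src)             # input: the program text, fed one character at a time
    print("target:", tgt)  # output: the printed result, emitted one character at a time
```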
