An empirical exploration of recurrent network architectures
Venue
Proceedings of the 32nd International Conference on Machine Learning (ICML), JMLR: W&CP volume 37
Publication Year
2015
Authors
Rafal Jozefowicz, Wojciech Zaremba, Ilya Sutskever
Abstract
The Recurrent Neural Network (RNN) is an extremely powerful sequence model that is
often difficult to train. The Long Short-Term Memory (LSTM) is a specific RNN
architecture whose design makes it much easier to train. While wildly successful in
practice, the LSTM’s architecture appears to be ad-hoc so it is not clear if it is
optimal, and the significance of its individual components is unclear. In this
work, we aim to determine whether the LSTM architecture is optimal or whether much
better architectures exist. We conducted a thorough architecture search where we
evaluated over ten thousand different RNN architectures, and identified an
architecture that outperforms both the LSTM and the recently-introduced Gated
Recurrent Unit (GRU) on some but not all tasks. We found that adding a bias of 1 to
the LSTM’s forget gate closes the gap between the LSTM and the GRU.
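The abstract's key practical finding, initializing the LSTM's forget-gate bias to 1, can be illustrated with a minimal single-step LSTM cell. This is a sketch, not the paper's code: the weight names, shapes, and gate ordering (input, forget, candidate, output) are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    # W, U: input/recurrent weights stacked for the 4 gates (i, f, g, o);
    # b: the matching stacked bias vector.
    z = W @ x + U @ h + b
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c_new = f * c + i * np.tanh(g)   # forget gate f scales the old cell state
    h_new = o * np.tanh(c_new)
    return h_new, c_new

hidden, inputs = 4, 3
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4 * hidden, inputs))
U = rng.normal(scale=0.1, size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
b[hidden:2 * hidden] = 1.0  # the paper's trick: forget-gate bias of 1
```

With this initialization the forget gate starts near sigmoid(1) ≈ 0.73 rather than 0.5, so the cell begins training biased toward remembering, which the paper finds closes the gap between the LSTM and the GRU.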
