Order matters: Sequence to sequence for sets
Venue
International Conference on Learning Representations (ICLR), 2016
Publication Year
2016
Authors
Oriol Vinyals, Samy Bengio, Manjunath Kudlur
Abstract
Sequences have become first-class citizens in supervised learning thanks to the
resurgence of recurrent neural networks. Many complex tasks that require mapping
from or to a sequence of observations can now be formulated with the
sequence-to-sequence (seq2seq) framework which employs the chain rule to
efficiently represent the joint probability of sequences. In many cases, however,
variable-sized inputs and/or outputs might not be naturally expressed as sequences.
For instance, it is not clear how to input a set of numbers into a model where the
task is to sort them; similarly, we do not know how to organize outputs when they
correspond to random variables and the task is to model their unknown joint
probability. In this paper, we first show using various examples that the order in
which we organize input and/or output data matters significantly when learning an
underlying model. We then discuss an extension of the seq2seq framework that goes
beyond sequences and handles input sets in a principled way. In addition, we
propose a loss which, by searching over possible orders during training, deals with
the lack of structure of output sets. We present empirical evidence for our claims
regarding ordering, and evaluate the proposed modifications to the seq2seq framework
on benchmark language modeling and parsing tasks, as well as on two artificial
tasks: sorting numbers and estimating the joint probability of unknown graphical
models.
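
As a rough sketch of the two ingredients mentioned in the abstract (notation ours, drawn from the standard seq2seq formulation rather than quoted from the paper), the chain-rule factorization over an output sequence and an order-searching training objective for unordered outputs can be written as

\[ P(Y \mid X) = \prod_{t=1}^{T} P\big(y_t \mid y_1, \ldots, y_{t-1}, X\big) \]

\[ \theta^{*} = \arg\max_{\theta} \sum_{(X, Y)} \max_{\pi} \, \log P\big(y_{\pi(1)}, \ldots, y_{\pi(T)} \mid X; \theta\big) \]

where \(\pi\) ranges over permutations of the \(T\) output elements, so that no single ordering of the structureless output set is fixed a priori. The second expression is a hedged reading of "searching over possible orders during training"; the exact objective used in the paper may differ in detail (for instance, by sampling orders rather than maximizing over all of them).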
