Pointer Networks

Oriol Vinyals
Meire Fortunato
Navdeep Jaitly
NIPS (2015), pp. 2692–2700

Abstract

We introduce a new neural architecture to learn the conditional probability of an
output sequence with elements that are discrete tokens corresponding to positions
in an input sequence. Such problems cannot be trivially addressed by existing approaches
such as sequence-to-sequence [1] and Neural Turing Machines [2], because
the number of target classes at each step of the output depends on the length
of the input, which is variable. Problems such as sorting variable-sized sequences,
and various combinatorial optimization problems belong to this class. Our model
solves the problem of variable-size output dictionaries using a recently proposed
mechanism of neural attention. It differs from previous attention approaches in
that, instead of using attention to blend hidden units of an encoder to a context
vector at each decoder step, it uses attention as a pointer to select a member of
the input sequence as the output. We call this architecture a Pointer Net (Ptr-Net).
We show Ptr-Nets can be used to learn approximate solutions to three challenging
geometric problems – finding planar convex hulls, computing Delaunay triangulations,
and the planar Travelling Salesman Problem – using training examples
alone. Ptr-Nets not only improve over sequence-to-sequence with input attention,
but also allow us to generalize to variable-size output dictionaries. We show that
the learnt models generalize beyond the maximum lengths they were trained on.
We hope our results on these tasks will encourage a broader exploration of neural
learning for discrete problems.
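
To make the core idea concrete, the following is a minimal NumPy sketch of attention used "as a pointer": the attention softmax over input positions is itself the output distribution, rather than being blended into a context vector as in standard encoder-decoder attention. The additive scoring form v^T tanh(W1 e_j + W2 d_i) and all names here (pointer_attention, W1, W2, v) are illustrative assumptions based on the abstract's description, not the paper's verbatim implementation.

    import numpy as np

    def softmax(x):
        x = x - x.max()          # subtract max for numerical stability
        e = np.exp(x)
        return e / e.sum()

    def pointer_attention(encoder_states, decoder_state, W1, W2, v):
        """Score every input position against the current decoder state
        and return a distribution over input positions (the "pointer")."""
        # Additive attention score per encoder state e_j:
        # u_j = v^T tanh(W1 e_j + W2 d)
        scores = np.array([v @ np.tanh(W1 @ e_j + W2 @ decoder_state)
                           for e_j in encoder_states])
        # Unlike standard attention, the softmax itself is the output
        # distribution over input elements; no context vector is formed.
        return softmax(scores)

    # Toy usage: 5 input positions, hidden size 8
    rng = np.random.default_rng(0)
    d = 8
    enc = rng.standard_normal((5, d))   # encoder hidden states e_1..e_5
    dec = rng.standard_normal(d)        # current decoder state d_i
    W1 = rng.standard_normal((d, d))
    W2 = rng.standard_normal((d, d))
    v = rng.standard_normal(d)

    p = pointer_attention(enc, dec, W1, W2, v)
    print(p.argmax(), p)  # index of the input element the net "points" to

Because the output distribution ranges over the input positions themselves, its size grows and shrinks with the input length, which is what lets the same trained model handle variable-size output dictionaries.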