Monotonic Chunkwise Attention

Chung-Cheng Chiu
Colin Raffel
ICLR (2018)

Abstract

Sequence-to-sequence models with an attention mechanism have been successfully applied to a wide variety of problems. Standard soft attention makes a pass over the entire input sequence when producing each element of the output sequence, which unfortunately results in quadratic time complexity and prevents its use in online/"real-time" settings. To address these issues, we propose Monotonic Chunkwise Attention (MoChA), which adaptively splits the input sequence into small chunks over which soft attention is computed. We show that models utilizing MoChA can be trained efficiently with standard backpropagation while allowing online and linear-time decoding at test time. When applied to online speech recognition, we obtain state-of-the-art results and match the performance of an offline soft attention decoder. In document summarization experiments, where we do not expect monotonic alignments, we show significantly improved performance compared to a baseline monotonic attention model.
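
To make the decoding procedure described above concrete, the following is a minimal sketch of one test-time MoChA step: a hard monotonic scan picks an attention endpoint in linear time, then soft attention is computed over a small fixed-size chunk ending at that endpoint. The helper functions `monotonic_energy` and `chunk_energy` are assumed placeholders for learned energy functions, not the paper's exact parameterization.

```python
import numpy as np

def mocha_decode_step(memory, query, prev_endpoint, chunk_size,
                      monotonic_energy, chunk_energy):
    """One test-time step of Monotonic Chunkwise Attention (a sketch).

    memory: array of shape (T, d) holding encoder states.
    query: decoder state for the current output step.
    prev_endpoint: index where the previous step's scan stopped.
    monotonic_energy, chunk_energy: assumed callables mapping
        (memory_entry, query) -> scalar energy (hypothetical names).
    """
    T = memory.shape[0]
    endpoint = None
    # Hard monotonic scan: proceed left to right from the previous
    # endpoint and stop at the first entry whose selection probability
    # exceeds 0.5.  This is online and linear-time overall.
    for t in range(prev_endpoint, T):
        p_select = 1.0 / (1.0 + np.exp(-monotonic_energy(memory[t], query)))
        if p_select >= 0.5:
            endpoint = t
            break
    if endpoint is None:
        # No entry selected: return a zero context (one common convention).
        return np.zeros_like(memory[0]), prev_endpoint

    # Soft attention restricted to a small chunk ending at the endpoint.
    start = max(0, endpoint - chunk_size + 1)
    energies = np.array([chunk_energy(memory[t], query)
                         for t in range(start, endpoint + 1)])
    weights = np.exp(energies - energies.max())
    weights /= weights.sum()
    context = weights @ memory[start:endpoint + 1]
    return context, endpoint
```

Because the scan never moves backward and the soft attention span is bounded by `chunk_size`, each output step touches only a constant-size window beyond the previous endpoint, which is what permits online, linear-time decoding. (Training instead uses a differentiable expectation over endpoints, so standard backpropagation applies.)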

Research Areas