Using Recurrent Neural Networks for Slot Filling in Spoken Language Understanding
Venue
IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 23 (2015), pp. 530-539
Publication Year
2015
Authors
Grégoire Mesnil, Yann Dauphin, Kaisheng Yao, Yoshua Bengio, Li Deng, Dilek Hakkani-Tür, Xiaodong He, Larry Heck, Gokhan Tur, Dong Yu, Geoffrey Zweig
Abstract
Semantic slot filling is one of the most challenging problems in spoken language
understanding (SLU). In this paper, we propose to use recurrent neural networks
(RNNs) for this task, and present several novel architectures designed to
efficiently model past and future temporal dependencies. Specifically, we
implemented and compared several important RNN architectures, including Elman,
Jordan, and hybrid variants. To facilitate reproducibility, we implemented these
networks with the publicly available Theano neural network toolkit and completed
experiments on the well-known airline travel information system (ATIS) benchmark.
In addition, we compared the approaches on two custom SLU data sets from the
entertainment and movies domains. Our results show that the RNN-based models
outperform the conditional random field (CRF) baseline by 2% in absolute error
reduction on the ATIS benchmark. We improve the state-of-the-art by 0.5% in the
entertainment domain and by 6.7% in the movies domain.
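
The abstract contrasts Elman- and Jordan-type recurrences without giving the equations, so a minimal sketch may help. The NumPy code below is an illustrative assumption, not the paper's Theano implementation: all names and dimensions (embed_dim, hidden_dim, n_slots, the toy word ids) are invented. It shows the Elman recurrence h_t = sigmoid(x_t Wx + h_{t-1} Wh) applied to per-word slot tagging; a Jordan-type network would instead feed the previous output distribution y_{t-1} back into the hidden layer.

```python
import numpy as np

# Sketch of an Elman-style RNN forward pass for slot tagging.
# All dimensions below are arbitrary placeholders.
rng = np.random.default_rng(0)
vocab_size, embed_dim, hidden_dim, n_slots = 1000, 50, 100, 20

# Parameters: word embeddings, input->hidden, hidden->hidden
# (the Elman recurrence), and hidden->output.
E  = rng.normal(scale=0.1, size=(vocab_size, embed_dim))
Wx = rng.normal(scale=0.1, size=(embed_dim, hidden_dim))
Wh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
Wy = rng.normal(scale=0.1, size=(hidden_dim, n_slots))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def elman_tag(word_ids):
    """Assign one slot label to each input word.

    Elman recurrence: h_t = sigmoid(x_t Wx + h_{t-1} Wh).
    (A Jordan-type network would feed the previous *output*
    y_{t-1} back into the hidden layer instead of h_{t-1}.)
    """
    h = np.zeros(hidden_dim)
    labels = []
    for w in word_ids:
        x = E[w]                                       # embed current word
        h = 1.0 / (1.0 + np.exp(-(x @ Wx + h @ Wh)))   # hidden state
        y = softmax(h @ Wy)                            # per-word slot posteriors
        labels.append(int(y.argmax()))
    return labels

# Toy usage: tag a 5-word "utterance" (word ids are arbitrary).
print(elman_tag([4, 17, 250, 3, 99]))
```

Note that this forward pass only captures past context; the "future temporal dependencies" mentioned in the abstract would additionally require a lookahead mechanism, such as a window over upcoming words or a backward pass.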
