Pruning Sparse Non-negative Matrix N-gram Language Models

Joris Pelemans
Noam M. Shazeer
Proceedings of Interspeech 2015, ISCA, pp. 1433–1437

Abstract

In this paper we present a pruning algorithm and experimental results for our recently proposed Sparse Non-negative Matrix (SNM) family of language models (LMs). We also illustrate a method for converting an SNM LM to ARPA back-off format, which can be readily used in a single-pass decoder for Automatic Speech Recognition. Note: we have since uncovered a bug in the experimental setup for SNM pruning; see the Errata section for corrected results.
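
Neither the pruning criterion nor the conversion procedure is spelled out in this abstract, so as background, the following is a minimal, self-contained Python sketch: a generic probability-threshold pruning step (a stand-in for illustration, not the paper's SNM-specific criterion) followed by emission of the surviving n-grams in standard ARPA back-off format (base-10 log probabilities with optional back-off weights). The toy probs and backoffs dictionaries are hypothetical placeholders for pruned model estimates.

import math

# Hypothetical toy bigram model: n-gram tuples -> probabilities and
# back-off weights. A real SNM LM would supply these after training.
probs = {
    ("<s>",): 0.2, ("</s>",): 0.2, ("the",): 0.4, ("cat",): 0.2,
    ("<s>", "the"): 0.8, ("the", "cat"): 0.7, ("cat", "</s>"): 0.9,
}
backoffs = {("<s>",): 0.5, ("the",): 0.6, ("cat",): 0.4}

def prune_by_threshold(probs, threshold=0.1):
    """Generic magnitude pruning: drop higher-order n-grams whose
    probability falls below the threshold, keeping all unigrams.
    Illustrative only; the paper uses its own SNM pruning criterion."""
    return {ng: p for ng, p in probs.items()
            if p >= threshold or len(ng) == 1}

def write_arpa(path, probs, backoffs, max_order=2):
    """Write n-gram probabilities in ARPA back-off format (log10)."""
    by_order = {}
    for ngram, p in probs.items():
        by_order.setdefault(len(ngram), []).append((ngram, p))
    with open(path, "w") as f:
        f.write("\\data\\\n")
        for order in range(1, max_order + 1):
            f.write(f"ngram {order}={len(by_order.get(order, []))}\n")
        for order in range(1, max_order + 1):
            f.write(f"\n\\{order}-grams:\n")
            for ngram, p in sorted(by_order.get(order, [])):
                line = f"{math.log10(p):.6f}\t{' '.join(ngram)}"
                bow = backoffs.get(ngram)
                # Back-off weights apply to all but the highest order.
                if bow is not None and order < max_order:
                    line += f"\t{math.log10(bow):.6f}"
                f.write(line + "\n")
        f.write("\n\\end\\\n")

write_arpa("toy.arpa", prune_by_threshold(probs), backoffs)

Files in this format load directly into standard toolkits such as SRILM or KenLM, which is what makes the conversion convenient for single-pass ASR decoding.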