In this paper we present a pruning algorithm and experimental results for our
recently proposed Sparse Non-negative Matrix (SNM) family of language models (LMs).
We have uncovered a bug in the experimental setup for SNM pruning; see the Errata
section for corrected results. We also illustrate a method for converting an SNM LM to
ARPA back-off format, which can be readily used in a single-pass decoder for
Automatic Speech Recognition.