Large Scale Language Modeling in Automatic Speech Recognition
Venue
Google (2012)
Publication Year
2012
Authors
Ciprian Chelba, Dan Bikel, Maria Shugrina, Patrick Nguyen, Shankar Kumar
BibTeX
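No entry is reproduced on this page; a minimal one can be assembled from the metadata above. The entry type and citation key below are assumptions:

@techreport{chelba2012largescale,
  title       = {Large Scale Language Modeling in Automatic Speech Recognition},
  author      = {Chelba, Ciprian and Bikel, Dan and Shugrina, Maria and Nguyen, Patrick and Kumar, Shankar},
  institution = {Google},
  year        = {2012}
}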
Abstract
Large language models have proven quite beneficial for a variety of automatic
speech recognition tasks at Google. We summarize results on Voice Search and a few
YouTube speech transcription tasks to highlight the impact that one can expect from
increasing both the amount of training data and the size of the language model
estimated from such data. Depending on the task, the availability and amount of
training data used, the language model size, and the amount of work and care put into
integrating it in the lattice rescoring step, we observe relative reductions in word
error rate between 6% and 10%, for systems operating over a wide range of word error
rates between 17% and 52%.
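As a quick sanity check on these figures, a short Python sketch maps a relative word error rate reduction to an absolute one; the baseline values used below are simply the endpoints of the ranges quoted in the abstract, paired for illustration.

def wer_after_relative_reduction(baseline_wer, relative_reduction):
    """Apply a relative reduction to a baseline word error rate.

    baseline_wer: WER as a fraction, e.g. 0.17 for 17%.
    relative_reduction: relative improvement, e.g. 0.10 for 10% relative.
    """
    return baseline_wer * (1.0 - relative_reduction)

# Endpoints of the ranges quoted above (illustrative pairings):
print(wer_after_relative_reduction(0.17, 0.10))  # 0.153  -> 15.3% WER
print(wer_after_relative_reduction(0.52, 0.06))  # 0.4888 -> 48.9% WER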
