Minimally Supervised Number Normalization

Transactions of the Association for Computational Linguistics, vol. 4 (2016), pp. 507-519

Abstract

We propose two models for verbalizing numbers, a key component in speech recognition and synthesis systems. The first model uses an end-to-end recurrent neural network. The second model, drawing inspiration from the linguistics literature, uses finite-state transducers constructed with a minimal amount of training data. While both models achieve near-perfect performance, the latter model can be trained using several orders of magnitude less data than the former, making it particularly useful for low-resource languages.
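To make the task concrete, the sketch below verbalizes small English integers with a hand-written rule cascade. It illustrates the input-output behavior of number normalization only; it is neither of the paper's models, and the function name and decomposition are illustrative assumptions.

    # Toy sketch of number verbalization (English integers below one
    # million). A hand-written rule cascade, not the paper's RNN or FST
    # models; names here are hypothetical.

    ONES = ["zero", "one", "two", "three", "four", "five", "six",
            "seven", "eight", "nine", "ten", "eleven", "twelve",
            "thirteen", "fourteen", "fifteen", "sixteen", "seventeen",
            "eighteen", "nineteen"]
    TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty",
            "seventy", "eighty", "ninety"]

    def verbalize(n: int) -> str:
        """Return the English verbalization of 0 <= n < 1_000_000."""
        if n < 20:
            return ONES[n]
        if n < 100:
            tens, rest = divmod(n, 10)
            return TENS[tens] + ("-" + ONES[rest] if rest else "")
        if n < 1000:
            hundreds, rest = divmod(n, 100)
            head = ONES[hundreds] + " hundred"
            return head + (" " + verbalize(rest) if rest else "")
        thousands, rest = divmod(n, 1000)
        head = verbalize(thousands) + " thousand"
        return head + (" " + verbalize(rest) if rest else "")

    print(verbalize(507))  # -> "five hundred seven"
    print(verbalize(519))  # -> "five hundred nineteen"

A full system must handle phenomena such a cascade glosses over (language-specific agreement, ordinals, digit-by-digit readings, and so on), which is what motivates the learned FST and RNN approaches the paper compares.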