Publication Details
A Fast Re-scoring Strategy to Capture Long-Distance Dependencies
Deoras Anoop
Mikolov Tomáš, Ing., Ph.D.
Church Kenneth
language model, re-scoring strategy, recurrent neural network
The paper describes a novel approach to lattice re-scoring with complex language models that capture long-distance dependencies, such as recurrent neural network language models.
A re-scoring strategy is proposed that makes it feasible to capture more long-distance dependencies in natural language. Two-pass strategies have become popular in a number of recognition tasks such as ASR (automatic speech recognition), MT (machine translation) and OCR (optical character recognition). The first pass typically applies a weak language model (n-grams) to a lattice, and the second pass applies a stronger language model to N-best lists. The stronger language model is intended to capture more long-distance dependencies. The proposed method uses an RNN-LM (recurrent neural network language model), a long-span LM, to re-score word lattices in the second pass. A hill-climbing method (iterative decoding) is proposed to search over islands of confusability in the word lattice. An evaluation on Broadcast News shows speedups of a factor of 20 over basic N-best re-scoring and a relative word error rate reduction of 8% on a highly competitive setup.
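The iterative-decoding idea described above can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the lattice is assumed to be pre-partitioned into "islands of confusability" (lists of alternative word sequences), and a toy scoring function stands in for a long-span LM such as an RNN-LM. The algorithm hill-climbs by re-picking the best alternative in each island given the current hypothesis, sweeping until a fixed point.

```python
def toy_lm_score(words):
    # Stand-in for a long-span LM (e.g. an RNN-LM): here it simply
    # rewards positional matches against one target sentence. A real
    # system would return the LM log-probability of the hypothesis.
    target = "the cat sat on the mat".split()
    return sum(1.0 for i, w in enumerate(words)
               if i < len(target) and w == target[i])

def iterative_decode(islands, score=toy_lm_score):
    # islands: list of islands of confusability; each island is a list
    # of alternative word sequences (tuples). Start from the first
    # alternative in each island (the first-pass 1-best path).
    choice = [0] * len(islands)

    def hypothesis():
        # Concatenate the currently chosen alternative of every island.
        words = []
        for isl, c in zip(islands, choice):
            words.extend(isl[c])
        return words

    prev = None
    while choice != prev:               # hill-climb until a fixed point
        prev = list(choice)
        for i, isl in enumerate(islands):
            # Re-score every alternative of island i with the rest of
            # the hypothesis held fixed, and keep the best one.
            scores = []
            for a in range(len(isl)):
                choice[i] = a
                scores.append(score(hypothesis()))
            choice[i] = max(range(len(isl)), key=scores.__getitem__)
    return hypothesis()

# Toy lattice: four islands, each offering two alternatives; the
# first-pass 1-best (index 0 everywhere) is wrong in every island.
islands = [
    [("a",), ("the",)],
    [("cap",), ("cat",)],
    [("sat", "in"), ("sat", "on")],
    [("the", "hat"), ("the", "mat")],
]
print(" ".join(iterative_decode(islands)))  # → the cat sat on the mat
```

Because each island is re-scored with the full surrounding context fixed, the stronger LM only evaluates a handful of hypotheses per island instead of a long N-best list, which is where the claimed speedup over N-best re-scoring comes from.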
@inproceedings{BUT76392,
author="Anoop {Deoras} and Tomáš {Mikolov} and Kenneth {Church}",
title="A Fast Re-scoring Strategy to Capture Long-Distance Dependencies",
booktitle="Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
year="2011",
pages="1116--1127",
publisher="Association for Computational Linguistics",
address="Edinburgh",
isbn="978-1-937284-11-4",
url="https://www.fit.vut.cz/research/publication/9687/"
}