Publication Details
Multilingual acoustic modeling for speech recognition based on Subspace Gaussian Mixture Models
Burget Lukáš
Schwarz Petr, Ing., Ph.D. (DCGM)
Agarwal Mohit
Akyazi Pinar
Feng Kai
Ghoshal Arnab
Glembek Ondřej, Ing., Ph.D.
Goel Nagendra
Karafiát Martin, Ing., Ph.D. (DCGM)
Povey Daniel
Rastrow Ariya
Rose Richard
Thomas Samuel
Large vocabulary speech recognition, Subspace Gaussian mixture model, Multilingual acoustic modeling
Although research has previously been done on multilingual speech recognition, it has been found to be very difficult to improve over separately trained systems. The usual approach has been to use some kind of "universal phone set" that covers multiple languages. We report experiments on a different approach to multilingual speech recognition, in which the phone sets are entirely distinct but the model has parameters not tied to specific states that are shared across languages. We use a model called a "Subspace Gaussian Mixture Model" where states' distributions are Gaussian Mixture Models with a common structure, constrained to lie in a subspace of the total parameter space. The parameters that define this subspace can be shared across languages. We obtain substantial WER improvements with this approach, especially with very small amounts of in-language training data.
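For readers unfamiliar with the model class, the standard SGMM parameterization (a sketch following Povey et al.'s SGMM formulation; the exact form used in this paper may differ in detail) makes the cross-lingual sharing concrete. Each state j is described by low-dimensional vectors v_jm, while the matrices M_i, weight vectors w_i, and covariances Σ_i define the shared subspace and can be tied across languages:

```latex
% Emission density of HMM state j (sum over substates m and shared Gaussians i):
p(\mathbf{x} \mid j) = \sum_{m=1}^{M_j} c_{jm} \sum_{i=1}^{I} w_{jmi}\,
    \mathcal{N}\!\left(\mathbf{x};\, \boldsymbol{\mu}_{jmi},\, \boldsymbol{\Sigma}_i\right)

% Means and mixture weights are derived from a low-dimensional state vector v_jm:
\boldsymbol{\mu}_{jmi} = \mathbf{M}_i \mathbf{v}_{jm}, \qquad
w_{jmi} = \frac{\exp\!\left(\mathbf{w}_i^{\top} \mathbf{v}_{jm}\right)}
               {\sum_{i'=1}^{I} \exp\!\left(\mathbf{w}_{i'}^{\top} \mathbf{v}_{jm}\right)}
```

Only the state-specific vectors v_jm and weights c_jm depend on a language's phone set; the globally shared parameters {M_i, w_i, Σ_i} can be estimated from (and shared across) multiple languages, which is what enables the multilingual training described in the abstract.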
@inproceedings{BUT37044,
author="Lukáš {Burget} and Petr {Schwarz} and Mohit {Agarwal} and Pinar {Akyazi} and Kai {Feng} and Arnab {Ghoshal} and Ondřej {Glembek} and Nagendra {Goel} and Martin {Karafiát} and Daniel {Povey} and Ariya {Rastrow} and Richard {Rose} and Samuel {Thomas}",
title="Multilingual acoustic modeling for speech recognition based on Subspace Gaussian Mixture Models",
booktitle="Proc. International Conference on Acoustics, Speech, and Signal Processing",
year="2010",
volume="2010",
number="3",
pages="4334--4337",
publisher="IEEE Signal Processing Society",
address="Dallas",
isbn="978-1-4244-4296-6",
issn="1520-6149",
url="http://www.fit.vutbr.cz/research/groups/speech/publi/2010/burget_icassp2010_4334.pdf"
}