Publication Details
Autoencoder based multi-stream combination for noise robust speech recognition
Mallidi Sri Harish
Ogawa Tetsuji
Veselý Karel, Ing., Ph.D. (DCGM)
Nidadavolu Phani (FIT)
Heřmanský Hynek, prof. Ing., Dr. Eng. (DCGM)
speech recognition, human-computer interaction, computational paralinguistics
In this paper, we propose a technique to estimate the performance of DNN-based classifiers, based on modeling the distribution of DNN outputs using autoencoders.
The performance of automatic speech recognition (ASR) systems degrades rapidly when there is a mismatch between the training and test acoustic conditions. Performance can be improved using a multi-stream framework, which combines posterior probabilities from several classifiers (often deep neural networks, DNNs) trained on different features/streams. Knowledge of the confidence of each of these classifiers on a noisy test utterance can help in devising better techniques for posterior combination than simple sum and product rules [1]. In this work, we propose to use autoencoders, which are multilayer feed-forward neural networks, to estimate this confidence measure. During the training phase, an autoencoder is trained for each stream on TANDEM features extracted from the corresponding DNN. Applying the autoencoders during the testing phase, we show that the reconstruction error of an autoencoder is correlated with the robustness of the corresponding stream. These error estimates are then used as confidence measures to combine the posterior probabilities generated by the individual streams. Experiments on the Aurora4 and BABEL databases show significant improvements, especially when the training and test acoustic conditions are mismatched.
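A minimal sketch of the idea described in the abstract, assuming per-stream autoencoders have already been trained on TANDEM features. The layer sizes, the softmax-over-negative-errors weighting, and all names (StreamAutoencoder, combine_posteriors, temperature) are illustrative assumptions, not the exact scheme from the paper.

```python
import torch
import torch.nn as nn

class StreamAutoencoder(nn.Module):
    """Small feed-forward autoencoder for one stream's TANDEM features."""
    def __init__(self, feat_dim: int, bottleneck: int = 24):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 128), nn.Tanh(),
                                     nn.Linear(128, bottleneck), nn.Tanh())
        self.decoder = nn.Sequential(nn.Linear(bottleneck, 128), nn.Tanh(),
                                     nn.Linear(128, feat_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))


def reconstruction_error(ae: StreamAutoencoder, feats: torch.Tensor) -> float:
    """Utterance-level mean squared reconstruction error for one stream."""
    with torch.no_grad():
        return torch.mean((ae(feats) - feats) ** 2).item()


def combine_posteriors(posteriors, errors, temperature=1.0):
    """Weight each stream's frame posteriors by a softmax over negative
    reconstruction errors (a hypothetical error-to-weight mapping; the
    paper's exact mapping may differ), then renormalise per frame."""
    weights = torch.softmax(-torch.tensor(errors) / temperature, dim=0)  # low error -> high weight
    combined = sum(w * p for w, p in zip(weights, posteriors))           # weighted sum rule
    return combined / combined.sum(dim=-1, keepdim=True)


# Example with two streams and random stand-in data (illustrative only).
T, D, P = 200, 40, 42                                  # frames, TANDEM dim, posterior dim
streams = [torch.rand(T, P).softmax(-1) for _ in range(2)]
feats   = [torch.randn(T, D) for _ in range(2)]
aes     = [StreamAutoencoder(D) for _ in range(2)]
errors  = [reconstruction_error(ae, f) for ae, f in zip(aes, feats)]
fused   = combine_posteriors(streams, errors)
```

In this sketch the stream with the lower reconstruction error (i.e. test features closer to the training distribution of its autoencoder) contributes more to the fused posteriors, which is the behaviour the abstract attributes to the proposed confidence measure.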
@inproceedings{BUT119906,
author="Sri Harish {Mallidi} and Tetsuji {Ogawa} and Karel {Veselý} and Phani {Nidadavolu} and Hynek {Heřmanský}",
title="Autoencoder based multi-stream combination for noise robust speech recognition",
booktitle="Proceeding of Interspeech 2015",
year="2015",
journal="Proceedings of Interspeech",
volume="2015",
number="09",
pages="3551--3555",
publisher="International Speech Communication Association",
address="Dresden",
isbn="978-1-5108-1790-6",
issn="1990-9772",
url="http://www.fit.vutbr.cz/research/groups/speech/publi/2015/mallidi_interspeech2015_IS150897.pdf"
}