Publication Details
Semi-supervised Training of Deep Neural Networks
semi-supervised training, self-training, deep network, DNN, Babel program
Our goal in this paper is to find an optimal data-selection strategy for semi-supervised DNN training. We performed an analysis at all three stages of DNN training.
In this paper we search for an optimal strategy for semi-supervised Deep Neural Network (DNN) training. We assume that a small part of the data is transcribed, while the majority is untranscribed. We explore self-training strategies with data selection based on both utterance-level and frame-level confidences. Further, we study the interactions between semi-supervised frame-discriminative training and sequence-discriminative sMBR training. We found it beneficial to reduce the disproportion between the amounts of transcribed and untranscribed data by including the transcribed data several times, as well as to perform frame selection based on per-frame confidences derived from confusion in a lattice. For the experiments, we used the Limited language pack condition of the Surprise language task (Vietnamese) from the IARPA Babel program. The absolute Word Error Rate (WER) improvement for frame cross-entropy training is 2.2%, which corresponds to a WER recovery of 36% when compared to an identical system where the DNN is built on the fully transcribed data only.
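To illustrate the frame-selection idea sketched in the abstract, the following is a minimal NumPy sketch, assuming per-frame posteriors have already been extracted from a decoding lattice. The function names, the confidence threshold, and the repeat count are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def select_confident_frames(frame_posteriors, threshold=0.7):
    """Hypothetical confusion-based frame selection.

    frame_posteriors: (T, num_pdfs) array of per-frame posteriors from a
    decoding lattice. A frame's confidence is taken as the posterior of its
    best label, so frames where the lattice is "confused" (flat posterior)
    score low and are dropped. The 0.7 threshold is an assumed value.
    Returns kept frame indices and their hypothesized labels.
    """
    confidences = frame_posteriors.max(axis=1)   # best-label posterior per frame
    labels = frame_posteriors.argmax(axis=1)     # hypothesized pdf-id per frame
    keep = confidences >= threshold
    return np.flatnonzero(keep), labels[keep]

def build_semi_supervised_set(transcribed_utts, untranscribed_posteriors,
                              repeat_transcribed=5):
    """Combine both data sources for self-training.

    Repeating the transcribed utterances several times reduces the
    disproportion between the amounts of transcribed and untranscribed
    data, as suggested in the abstract (the repeat count of 5 is a guess).
    """
    selected = [select_confident_frames(p) for p in untranscribed_posteriors]
    return transcribed_utts * repeat_transcribed, selected
```

A usage sketch: given `untranscribed_posteriors` as a list of per-utterance posterior matrices, `build_semi_supervised_set` would yield the oversampled transcribed set plus (frame index, label) pairs for the confidently decoded frames of the untranscribed set, which together form the targets for frame cross-entropy training.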
@inproceedings{BUT105976,
author="Karel {Veselý} and Mirko {Hannemann} and Lukáš {Burget}",
title="Semi-supervised Training of Deep Neural Networks",
booktitle="Proceedings of ASRU 2013",
year="2013",
pages="267--272",
publisher="IEEE Signal Processing Society",
address="Olomouc",
isbn="978-1-4799-2755-5",
url="http://www.fit.vutbr.cz/research/groups/speech/publi/2013/vesely_asru2013_0000267.pdf"
}