Publication Details
Improving Noise Robustness of Automatic Speech Recognition via Parallel Data and Teacher-student Learning
MOŠNER, L.
WU, M.
RAJU, A.
PARTHASARATHI, S.
KUMATANI, K.
SUNDARAM, S.
MAAS, R.
HOFFMEISTER, B.
automatic speech recognition, noise robustness, teacher-student training, domain adaptation
For real-world speech recognition applications, noise robustness is still
a challenge. In this work, we adopt the teacher-student (T/S) learning technique
using a parallel clean and noisy corpus to improve automatic speech
recognition (ASR) performance under multimedia noise. On top of that, we apply
a logits selection method which preserves only the k highest values to prevent
wrong emphasis of knowledge from the teacher and to reduce the bandwidth needed
for transferring data. We incorporate up to 8000 hours of untranscribed data for
training and present our results on sequence-trained models in addition to
cross-entropy trained ones. The best sequence-trained student model yields
relative word error rate (WER) reductions of approximately 10.1%, 28.7% and
19.6% on our clean, simulated noisy and real test sets, respectively, compared
to a sequence-trained teacher.
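The abstract outlines the core recipe: a teacher acoustic model consumes the clean
channel of the parallel corpus, a student consumes the noisy channel, and the student
is trained against the teacher's soft targets after keeping only the k largest logits.
The snippet below is a minimal PyTorch-style sketch of that objective, not the authors'
implementation; the names (topk_soft_targets, ts_loss, teacher_logits, student_logits)
and the choice of k are illustrative assumptions.

import torch
import torch.nn.functional as F

def topk_soft_targets(teacher_logits, k=20):
    # Keep only the k largest teacher logits per frame (k is illustrative);
    # the remaining classes are masked so they receive zero probability.
    topk_vals, topk_idx = teacher_logits.topk(k, dim=-1)
    masked = torch.full_like(teacher_logits, float('-inf'))
    masked.scatter_(-1, topk_idx, topk_vals)
    return F.softmax(masked, dim=-1)

def ts_loss(student_logits, teacher_logits, k=20):
    # Frame-level cross entropy between the student posteriors (computed on
    # the noisy channel) and the top-k teacher posteriors (computed on the
    # parallel clean channel); equivalent to the KL divergence up to a
    # constant that does not affect the student's gradients.
    soft_targets = topk_soft_targets(teacher_logits, k)
    log_probs = F.log_softmax(student_logits, dim=-1)
    return -(soft_targets * log_probs).sum(dim=-1).mean()

In such a setup, the noisy channel would typically be obtained by adding multimedia
noise to the clean utterances, so teacher and student frames stay time-aligned and
no transcriptions are required for the untranscribed training data.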
@inproceedings{BUT160006,
author="MOŠNER, L. and WU, M. and RAJU, A. and PARTHASARATHI, S. and KUMATANI, K. and SUNDARAM, S. and MAAS, R. and HOFFMEISTER, B.",
title="Improving Noise Robustness of Automatic Speech Recognition via Parallel Data and Teacher-student Learning",
booktitle="Proceedings of ICASSP",
year="2019",
pages="6475--6479",
publisher="IEEE Signal Processing Society",
address="Brighton",
doi="10.1109/ICASSP.2019.8683422",
isbn="978-1-5386-4658-8",
url="https://ieeexplore.ieee.org/document/8683422"
}