Publication Details

Analysis and Optimization of Bottleneck Features for Speaker Recognition

LOZANO DÍEZ, A.; SILNOVA, A.; MATĚJKA, P.; GLEMBEK, O.; PLCHOT, O.; PEŠÁN, J.; BURGET, L.; GONZALEZ-RODRIGUEZ, J. Analysis and Optimization of Bottleneck Features for Speaker Recognition. In Proceedings of Odyssey 2016. Proceedings of Odyssey: The Speaker and Language Recognition Workshop Odyssey 2014, Joensuu, Finland. Bilbao: International Speech Communication Association, 2016. p. 352-357. ISSN: 2312-2846.
Czech title
Analýza a optimalizace bottle-neck parametrů pro rozpoznávání mluvčího
Type
conference paper
Language
English
Authors
Lozano Díez Alicia, Ph.D.
Silnova Anna, M.Sc., Ph.D. (DCGM)
Matějka Pavel, Ing., Ph.D. (DCGM)
Glembek Ondřej, Ing., Ph.D.
Plchot Oldřich, Ing., Ph.D. (DCGM)
Pešán Jan, Ing. (DCGM)
Burget Lukáš, doc. Ing., Ph.D. (DCGM)
Gonzalez-Rodriguez Joaquin (FIT)
URL
http://www.odyssey2016.org/papers/pdfs_stamped/54.pdf
Keywords

bottleneck features, speaker recognition

Abstract

Recently, Deep Neural Network (DNN) based bottleneck features proved to be very effective in i-vector based speaker recognition. However, bottleneck feature extraction is usually fully optimized for the speech recognition task rather than for speaker recognition. In this paper, we explore whether DNNs suboptimal for speech recognition can provide better bottleneck features for speaker recognition. We experiment with different features optimized for speech or speaker recognition as input to the DNN. We also experiment with under-trained DNNs, where training was interrupted before full convergence of the speech recognition objective. Moreover, we analyze the effect of normalizing the features at the input and/or at the output of bottleneck feature extraction to see how it affects the final speaker recognition system performance. We evaluated the systems in the SRE10, condition 5, female task. Results show that the best configuration of the DNN in terms of phone accuracy does not necessarily imply better performance of the final speaker recognition system. Finally, we compare the performance of bottleneck features and the standard MFCC features in an i-vector/PLDA speaker recognition system. The best bottleneck features yield up to a 37% relative improvement in terms of EER.
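The extraction scheme the abstract describes, a DNN trained for phone classification whose narrow hidden layer supplies features, with optional mean/variance normalization at the extractor's input and output, can be sketched as follows. This is a minimal illustration only: the layer sizes, random weights, and the `mvn` helper are assumptions for the sketch, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mvn(x):
    # Mean and variance normalization over the utterance (per feature dimension)
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

# Illustrative layer sizes: acoustic input -> hidden -> 80-dim bottleneck -> hidden -> phone targets
dims = [39, 512, 80, 512, 40]
weights = [rng.standard_normal((a, b)) * 0.01 for a, b in zip(dims[:-1], dims[1:])]

def extract_bottleneck(frames, bottleneck_layer=2, normalize_in=True, normalize_out=True):
    """Forward frames partway through the DNN and return the bottleneck-layer activations."""
    h = mvn(frames) if normalize_in else frames
    for i, w in enumerate(weights[:bottleneck_layer]):
        h = h @ w
        if i < bottleneck_layer - 1:
            h = np.maximum(h, 0.0)  # ReLU on hidden layers; bottleneck kept linear here
    return mvn(h) if normalize_out else h

utterance = rng.standard_normal((300, 39))   # 300 frames of 39-dim acoustic features
bn = extract_bottleneck(utterance)
print(bn.shape)  # (300, 80)
```

The `normalize_in`/`normalize_out` flags mirror the input/output normalization variants the paper analyzes; the resulting frame-level features would then feed an i-vector/PLDA back-end.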

Annotation

In this work, we studied whether networks trained for ASR but not fully optimized could provide better bottleneck features for speaker recognition. Then, we analyzed the influence of different aspects (input features, short-term mean and variance normalization, "under-trained" DNNs) when training DNNs to optimize the performance of speaker recognition systems based on bottleneck features. We evaluated the performance of the resulting bottleneck features in the NIST SRE10, condition 5, female task.

Published
2016
Pages
352–357
Journal
Proceedings of Odyssey: The Speaker and Language Recognition Workshop Odyssey 2014, Joensuu, Finland, vol. 2016, no. 06, ISSN 2312-2846
Proceedings
Proceedings of Odyssey 2016
Publisher
International Speech Communication Association
Place
Bilbao
DOI
10.21437/Odyssey.2016-51
EID Scopus
BibTeX
@inproceedings{BUT131002,
  author="Alicia {Lozano Díez} and Anna {Silnova} and Pavel {Matějka} and Ondřej {Glembek} and Oldřich {Plchot} and Jan {Pešán} and Lukáš {Burget} and Joaquin {Gonzalez-Rodriguez}",
  title="Analysis and Optimization of Bottleneck Features for Speaker Recognition",
  booktitle="Proceedings of Odyssey 2016",
  year="2016",
  journal="Proceedings of Odyssey: The Speaker and Language Recognition Workshop Odyssey 2014, Joensuu, Finland",
  volume="2016",
  number="06",
  pages="352--357",
  publisher="International Speech Communication Association",
  address="Bilbao",
  doi="10.21437/Odyssey.2016-51",
  issn="2312-2846",
  url="http://www.odyssey2016.org/papers/pdfs_stamped/54.pdf"
}