Publication Details
On the Usage of Phonetic Information for Text-independent Speaker Embedding Extraction
Wang Shuai
Rohdin Johan Andréas, M.Sc., Ph.D. (DCGM)
Burget Lukáš, doc. Ing., Ph.D. (DCGM)
Plchot Oldřich, Ing., Ph.D. (DCGM)
Qian Yanmin
Yu Kai
Černocký Jan, prof. Dr. Ing. (DCGM)
phonetic information, text-independent speaker verification, adversarial training
Embeddings extracted by deep neural networks have become the state-of-the-art utterance representation in speaker recognition systems. It has recently been shown that incorporating frame-level phonetic information in the embedding extractor can improve the speaker recognition performance. On the other hand, in the final embedding, phonetic information is just an additional source of session variability which may be harmful to the text-independent speaker recognition task. This suggests that at the embedding level phonetic information should be suppressed rather than encouraged. To verify this hypothesis, we perform several experiments that encourage and/or suppress phonetic information at various stages in the network. Our experiments confirm that multitask learning is beneficial if it is applied at the frame-level stage of the network, whereas adversarial training is beneficial if it is used at the segment-level stage of the network. Additionally, the combination of these two approaches improves the performance further, resulting in an equal error rate of 3.17% on the VoxCeleb dataset.
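The adversarial training mentioned in the abstract is commonly implemented with a gradient reversal layer (GRL): identity in the forward pass, negated (and scaled) gradient in the backward pass, so the embedding extractor is trained to remove the information an adversarial phonetic classifier relies on. The following is a minimal illustrative sketch of that mechanism (the class name and `lam` parameter are assumptions for illustration, not the authors' code):

```python
import numpy as np

class GradientReversal:
    """Sketch of a gradient reversal layer for adversarial training.

    Forward pass: identity (the segment-level embedding is unchanged).
    Backward pass: the gradient coming from the adversarial phone
    classifier is multiplied by -lam before it reaches the embedding
    extractor, pushing the extractor to suppress phonetic information.
    """

    def __init__(self, lam=1.0):
        self.lam = lam  # strength of the adversarial signal (assumed name)

    def forward(self, x):
        # Identity: pass the embedding through unchanged.
        return x

    def backward(self, grad_output):
        # Reverse and scale the incoming gradient.
        return -self.lam * grad_output

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
y = grl.forward(x)            # embedding is unchanged
g = grl.backward(np.ones(3))  # gradient becomes [-0.5, -0.5, -0.5]
```

In a full system, this layer would sit between the segment-level embedding and the adversarial phonetic classifier, while a separate (non-reversed) multitask phone classifier would branch off at the frame level.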
@inproceedings{BUT159994,
author="WANG, S. and ROHDIN, J. and BURGET, L. and PLCHOT, O. and QIAN, Y. and YU, K. and ČERNOCKÝ, J.",
title="On the Usage of Phonetic Information for Text-independent Speaker Embedding Extraction",
booktitle="Proceedings of Interspeech",
year="2019",
volume="2019",
number="9",
pages="1148--1152",
publisher="International Speech Communication Association",
address="Graz",
doi="10.21437/Interspeech.2019-3036",
issn="1990-9772",
url="https://www.isca-speech.org/archive/Interspeech_2019/pdfs/3036.pdf"
}