Publication Details
Self-supervised speaker embeddings
STAFYLAKIS, T.
Rohdin Johan Andréas, M.Sc., Ph.D. (DCGM)
Plchot Oldřich, Ing., Ph.D. (DCGM)
MIZERA, P.
Burget Lukáš, doc. Ing., Ph.D. (DCGM)
speaker recognition, self-supervised learning, deep learning
Contrary to i-vectors, speaker embeddings such as x-vectors are incapable of leveraging unlabelled utterances, due to the classification loss over training speakers. In this paper, we explore an alternative training strategy to enable the use of unlabelled utterances in training. We propose to train speaker embedding extractors via reconstructing the frames of a target speech segment, given the inferred embedding of another speech segment of the same utterance. We do this by attaching to the standard speaker embedding extractor a decoder network, which we feed not merely with the speaker embedding, but also with the estimated phone sequence of the target frame sequence. The reconstruction loss can be used either as a single objective, or be combined with the standard speaker classification loss. In the latter case, it acts as a regularizer, encouraging generalizability to speakers unseen during training. In all cases, the proposed architectures are trained from scratch and in an end-to-end fashion. We demonstrate the benefits from the proposed approach on the VoxCeleb and Speakers in the Wild databases, and we report notable improvements over the baseline.
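The combined objective described in the abstract can be illustrated with a minimal sketch: one segment of an utterance is encoded into an embedding, a decoder conditioned on that embedding plus the target segment's phone features reconstructs the other segment, and the reconstruction loss is optionally combined with a speaker classification loss. All dimensions, the linear stand-in "networks", and the weighting parameter `alpha` below are hypothetical placeholders, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative only, not the paper's configuration)
FEAT_DIM, EMB_DIM, PHONE_DIM, N_SPK, T = 40, 64, 32, 100, 50

# Toy linear maps standing in for the encoder, decoder, and classifier networks
W_enc = rng.standard_normal((FEAT_DIM, EMB_DIM)) * 0.1
W_dec = rng.standard_normal((EMB_DIM + PHONE_DIM, FEAT_DIM)) * 0.1
W_cls = rng.standard_normal((EMB_DIM, N_SPK)) * 0.1

def embed(frames):
    # Mean-pool frame features, then project: a stand-in for the
    # statistics-pooling embedding extractor (e.g. an x-vector network)
    return np.tanh(frames.mean(axis=0) @ W_enc)

def decode(emb, phones):
    # Reconstruct target frames from [speaker embedding; phone features]
    cond = np.concatenate([np.tile(emb, (phones.shape[0], 1)), phones], axis=1)
    return cond @ W_dec

def combined_loss(seg_a, seg_b, phones_b, spk_id, alpha=1.0):
    emb = embed(seg_a)                            # embedding from segment A
    recon = decode(emb, phones_b)                 # reconstruct segment B
    recon_loss = np.mean((recon - seg_b) ** 2)    # self-supervised term
    logits = emb @ W_cls
    logp = logits - np.log(np.exp(logits).sum())  # log-softmax
    cls_loss = -logp[spk_id]                      # supervised term (optional)
    return recon_loss + alpha * cls_loss

# Two segments from the same utterance, plus phone features for the target
seg_a = rng.standard_normal((T, FEAT_DIM))
seg_b = rng.standard_normal((T, FEAT_DIM))
phones_b = rng.standard_normal((T, PHONE_DIM))
loss = combined_loss(seg_a, seg_b, phones_b, spk_id=3)
print(loss)
```

With `alpha=0` the classification term drops out and the objective becomes purely self-supervised, which is the single-objective mode the abstract mentions.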
@inproceedings{BUT159999,
author="STAFYLAKIS, T. and ROHDIN, J. and PLCHOT, O. and MIZERA, P. and BURGET, L.",
title="Self-supervised speaker embeddings",
booktitle="Proceedings of Interspeech",
year="2019",
volume="2019",
number="9",
pages="2863--2867",
publisher="International Speech Communication Association",
address="Graz",
doi="10.21437/Interspeech.2019-2842",
issn="1990-9772",
url="https://www.isca-speech.org/archive/Interspeech_2019/pdfs/2842.pdf"
}