Publication Details
Self-supervised speaker embeddings
Stafylakis Themos
Rohdin Johan Andréas, M.Sc., Ph.D. (DCGM)
Plchot Oldřich, Ing., Ph.D. (DCGM)
Mizera Petr
Burget Lukáš, doc. Ing., Ph.D. (DCGM)
speaker recognition, self-supervised learning, deep learning
Unlike i-vectors, speaker embeddings such as x-vectors are incapable of
leveraging unlabelled utterances, because they are trained with a
classification loss over the training speakers. In this paper, we explore an
alternative training strategy that enables the use of unlabelled utterances in
training. We propose to train speaker embedding extractors by reconstructing
the frames of a target speech segment, given the inferred embedding of another
speech segment of the same utterance. We do this by attaching a decoder
network to the standard speaker embedding extractor and feeding it not merely
with the speaker embedding, but also with the estimated phone sequence of the
target frame sequence. The reconstruction loss can be used either as the sole
objective or combined with the standard speaker classification loss; in the
latter case, it acts as a regularizer that encourages generalization to
speakers unseen during training. In all cases, the proposed architectures are
trained from scratch and in an end-to-end fashion. We demonstrate the benefits
of the proposed approach on the VoxCeleb and Speakers in the Wild databases,
and we report notable improvements over the baseline.
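
As a rough illustration of the training objective described in the abstract, the following is a minimal PyTorch sketch, not the authors' implementation: an encoder infers an embedding from one segment of an utterance, and a decoder reconstructs the frames of a second segment of the same utterance from that embedding together with the second segment's phone labels. All dimensions, module choices (mean pooling, an MLP decoder), and the loss weight alpha are illustrative assumptions.

import torch
import torch.nn as nn

FEAT_DIM, EMB_DIM, N_PHONES, N_SPEAKERS = 40, 512, 40, 1000  # assumed sizes

class Encoder(nn.Module):
    """Stand-in for an x-vector-style extractor: frame network + mean pooling."""
    def __init__(self):
        super().__init__()
        self.frame_net = nn.Sequential(
            nn.Conv1d(FEAT_DIM, 512, 5, padding=2), nn.ReLU(),
            nn.Conv1d(512, 512, 3, padding=1), nn.ReLU(),
        )
        self.proj = nn.Linear(512, EMB_DIM)

    def forward(self, feats):                       # feats: (B, T, FEAT_DIM)
        h = self.frame_net(feats.transpose(1, 2))   # (B, 512, T)
        return self.proj(h.mean(dim=2))             # (B, EMB_DIM)

class Decoder(nn.Module):
    """Reconstructs target frames from the embedding plus per-frame phone labels."""
    def __init__(self):
        super().__init__()
        self.phone_emb = nn.Embedding(N_PHONES, 64)
        self.net = nn.Sequential(
            nn.Linear(EMB_DIM + 64, 512), nn.ReLU(),
            nn.Linear(512, FEAT_DIM),
        )

    def forward(self, spk_emb, phones):             # phones: (B, T) int labels
        p = self.phone_emb(phones)                  # (B, T, 64)
        e = spk_emb.unsqueeze(1).expand(-1, p.size(1), -1)
        return self.net(torch.cat([e, p], dim=-1))  # (B, T, FEAT_DIM)

encoder, decoder = Encoder(), Decoder()
classifier = nn.Linear(EMB_DIM, N_SPEAKERS)  # only used for the combined loss
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters())
    + list(classifier.parameters())
)

def training_step(seg_a, seg_b, phones_b, spk_ids=None, alpha=1.0):
    """seg_a, seg_b: two segments of one utterance; phones_b: phone labels of seg_b."""
    emb = encoder(seg_a)                 # embedding inferred from segment A
    recon = decoder(emb, phones_b)       # reconstruct segment B's frames
    loss = nn.functional.mse_loss(recon, seg_b)
    if spk_ids is not None:              # optional supervised regularizer
        loss = loss + alpha * nn.functional.cross_entropy(classifier(emb), spk_ids)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

With spk_ids=None this is the purely self-supervised variant driven by the reconstruction loss alone; passing speaker labels adds the classification term, so the reconstruction loss acts as the regularizer described above.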
@inproceedings{BUT159999,
author="STAFYLAKIS, T. and ROHDIN, J. and PLCHOT, O. and MIZERA, P. and BURGET, L.",
title="Self-supervised speaker embeddings",
booktitle="Proceedings of Interspeech",
year="2019",
journal="Proceedings of Interspeech",
volume="2019",
number="9",
pages="2863--2867",
publisher="International Speech Communication Association",
address="Graz",
doi="10.21437/Interspeech.2019-2842",
issn="1990-9772",
url="https://www.isca-speech.org/archive/Interspeech_2019/pdfs/2842.pdf"
}