Publication Details
Speaker embeddings by modeling channel-wise correlations
speaker recognition, style transfer, deep learning
Speaker embeddings extracted with deep 2D convolutional neural networks are
typically modeled as projections of first- and second-order statistics of
channel-frequency pairs onto a linear layer, using either average or attentive
pooling along the time axis. In this paper we examine an alternative pooling
method, where pairwise correlations between channels for given frequencies are
used as statistics. The method is inspired by style-transfer methods in computer
vision, where the style of an image, modeled by the matrix of channel-wise
correlations, is transferred to another image to produce a new image having the
style of the first and the content of the second. By drawing analogies between
image style and speaker characteristics, and between image content and the
phonetic sequence, we explore the use of such channel-wise correlation features
to train a ResNet architecture in an end-to-end fashion. Our experiments on
VoxCeleb demonstrate the effectiveness of the proposed pooling method in speaker
recognition.
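
To make the pooling idea concrete, the following is a minimal PyTorch sketch of a Gram-matrix-style pooling layer in the spirit of the abstract: for each frequency bin it computes the correlation matrix between channels across time, keeps the upper-triangular entries, and projects their concatenation to an embedding. The class name ChannelCorrelationPooling and all hyperparameters are illustrative assumptions, not the authors' released code; the paper's exact normalization and dimensionality-reduction choices may differ.

import torch
import torch.nn as nn

class ChannelCorrelationPooling(nn.Module):
    """Pool a (batch, channels, freq, time) feature map into per-frequency
    channel-wise correlation statistics, analogous to style-transfer Gram
    matrices. Illustrative sketch only."""

    def __init__(self, channels: int, freq_bins: int, embed_dim: int):
        super().__init__()
        # Unique entries of a symmetric C x C correlation matrix (incl. diagonal).
        n_pairs = channels * (channels + 1) // 2
        self.proj = nn.Linear(freq_bins * n_pairs, embed_dim)
        # Indices of the upper triangle, precomputed once.
        self.register_buffer("triu_idx", torch.triu_indices(channels, channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, F, T) feature map from the 2D CNN trunk.
        B, C, F, T = x.shape
        x = x - x.mean(dim=-1, keepdim=True)          # zero mean along time
        x = x / (x.std(dim=-1, keepdim=True) + 1e-5)  # unit variance along time
        x = x.permute(0, 2, 1, 3)                     # (B, F, C, T)
        corr = torch.matmul(x, x.transpose(-1, -2)) / T      # (B, F, C, C)
        corr = corr[:, :, self.triu_idx[0], self.triu_idx[1]]  # (B, F, n_pairs)
        return self.proj(corr.flatten(1))             # (B, embed_dim)

# Example usage with assumed shapes: 4 utterances, 64 channels, 10 frequency
# bins, 200 frames, yielding 256-dimensional speaker embeddings.
feat = torch.randn(4, 64, 10, 200)
pool = ChannelCorrelationPooling(channels=64, freq_bins=10, embed_dim=256)
emb = pool(feat)  # (4, 256)

Note that, unlike mean or attentive pooling, the statistics here grow quadratically with the number of channels, so the projection layer can dominate the parameter count; any scheme for reducing it is an implementation choice outside this sketch.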
@inproceedings{BUT175834,
  author    = "Themos {Stafylakis} and Johan Andréas {Rohdin} and Lukáš {Burget}",
  title     = "Speaker embeddings by modeling channel-wise correlations",
  booktitle = "Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH",
  year      = "2021",
  volume    = "2021",
  number    = "8",
  pages     = "501--505",
  publisher = "International Speech Communication Association",
  address   = "Brno",
  doi       = "10.21437/Interspeech.2021-1442",
  issn      = "1990-9772",
  url       = "https://www.isca-speech.org/archive/interspeech_2021/stafylakis21_interspeech.html"
}