Publication Details
Speaker embeddings by modeling channel-wise correlations
speaker recognition, style-transfer, deep learning
Speaker embeddings extracted with deep 2D convolutional neural networks are typically modeled as projections of first and second order statistics of channel-frequency pairs onto a linear layer, using either average or attentive pooling along the time axis. In this paper we examine an alternative pooling method, where pairwise correlations between channels for given frequencies are used as statistics. The method is inspired by style-transfer methods in computer vision, where the style of an image, modeled by the matrix of channel-wise correlations, is transferred to another image, in order to produce a new image having the style of the first and the content of the second. By drawing analogies between image style and speaker characteristics, and between image content and phonetic sequence, we explore the use of such channel-wise correlation features to train a ResNet architecture in an end-to-end fashion. Our experiments on VoxCeleb demonstrate the effectiveness of the proposed pooling method in speaker recognition.
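The pooling idea described in the abstract can be sketched in a few lines of NumPy: for each frequency bin, compute the Gram-style matrix of pairwise channel correlations over time and concatenate its upper triangles into a fixed-size vector. This is a minimal illustration of the general technique, not the paper's exact recipe; the function name, the mean-centering, and the (channels, frequency, time) layout are assumptions for the sketch.

```python
import numpy as np

def channel_correlation_pooling(x):
    """Pool a (channels, freq, time) feature map into a fixed-size vector.

    For each frequency bin, the (C, C) matrix of pairwise channel
    correlations over time is computed (a Gram matrix, as in neural
    style transfer), and the upper triangles are concatenated.
    Illustrative sketch only; details differ from the paper.
    """
    C, F, T = x.shape
    iu = np.triu_indices(C)              # upper triangle, incl. diagonal
    feats = []
    for f in range(F):
        xf = x[:, f, :]                  # (C, T) activations at this frequency
        xf = xf - xf.mean(axis=1, keepdims=True)   # center over time (assumed)
        gram = xf @ xf.T / T             # (C, C) channel covariance over time
        feats.append(gram[iu])
    return np.concatenate(feats)         # length F * C*(C+1)/2
```

For C channels and F frequency bins this yields an F * C*(C+1)/2-dimensional statistic per utterance, independent of the utterance length T, which is what allows it to replace mean/attentive pooling before the embedding layer.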
@inproceedings{BUT175834,
author="Themos {Stafylakis} and Johan Andréas {Rohdin} and Lukáš {Burget}",
title="Speaker embeddings by modeling channel-wise correlations",
booktitle="Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH",
year="2021",
journal="Proceedings of Interspeech",
volume="2021",
number="8",
pages="501--505",
publisher="International Speech Communication Association",
address="Brno",
doi="10.21437/Interspeech.2021-1442",
issn="1990-9772",
url="https://www.isca-speech.org/archive/interspeech_2021/stafylakis21_interspeech.html"
}