Publication Details
How To Improve Your Speaker Embeddings Extractor in Generic Toolkits
Zeinali Hossein
Burget Lukáš, doc. Ing., Ph.D. (DCGM)
Rohdin Johan Andréas, M.Sc., Ph.D. (DCGM)
Stafylakis Themos
Černocký Jan, prof. Dr. Ing. (DCGM)
Deep neural network, speaker embedding, x-vector, TensorFlow, Kaldi.
Recently, speaker embeddings extracted with deep neural networks have become the state-of-the-art method for speaker verification. In this paper, we aim to facilitate their implementation in a more generic toolkit than Kaldi, which we anticipate will enable further improvements to the method. We examine several training tricks, such as the effects of normalizing input features and pooled statistics, different methods for preventing overfitting, and alternative nonlinearities that can be used instead of Rectified Linear Units. In addition, we investigate the difference in performance between TDNN and CNN architectures, and between two types of attention mechanism. Experimental results on the Speakers in the Wild, SRE 2016, and SRE 2018 datasets demonstrate the effectiveness of the proposed implementation.
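The abstract mentions pooling of frame-level statistics and attention mechanisms as ingredients of the embedding extractor. The snippet below is a minimal sketch, not the paper's implementation: a single-head attention-weighted statistics pooling layer written with the TensorFlow Keras API. The layer sizes, shapes, and the particular attention form are assumptions made for illustration only.

import tensorflow as tf

class AttentiveStatsPooling(tf.keras.layers.Layer):
    """Pools a (batch, frames, dim) tensor of frame-level features into an
    utterance-level vector of attention-weighted mean and standard deviation."""

    def __init__(self, attention_units=128, **kwargs):
        super().__init__(**kwargs)
        # Hypothetical single-head attention: a small tanh bottleneck
        # followed by one scalar score per frame.
        self.hidden = tf.keras.layers.Dense(attention_units, activation="tanh")
        self.score = tf.keras.layers.Dense(1)

    def call(self, frames):
        # frames: (batch, num_frames, feat_dim)
        weights = tf.nn.softmax(self.score(self.hidden(frames)), axis=1)
        # Weighted first- and second-order statistics over the frame axis.
        mean = tf.reduce_sum(weights * frames, axis=1)
        var = tf.reduce_sum(weights * tf.square(frames - mean[:, None, :]), axis=1)
        std = tf.sqrt(var + 1e-8)  # small epsilon for numerical stability
        # Concatenated statistics form the utterance-level representation,
        # which a subsequent dense layer would map to the speaker embedding.
        return tf.concat([mean, std], axis=-1)

With uniform (untrained) attention weights this reduces to plain mean-and-standard-deviation pooling; whether and how the pooled statistics are further normalized is one of the training choices examined in the paper.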
@inproceedings{BUT158087,
author="Hossein {Zeinali} and Lukáš {Burget} and Johan Andréas {Rohdin} and Themos {Stafylakis} and Jan {Černocký}",
title="How To Improve Your Speaker Embeddings Extractor in Generic Toolkits",
booktitle="Proceedings of 2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP)",
year="2019",
pages="6141--6145",
publisher="IEEE Signal Processing Society",
address="Brighton",
doi="10.1109/ICASSP.2019.8683445",
isbn="978-1-5386-4658-8",
url="https://ieeexplore.ieee.org/abstract/document/8683445"
}