Publication Details
Speech-Based Emotion Recognition with Self-Supervised Models Using Attentive Channel-Wise Correlations and Label Smoothing
emotion recognition, self-supervised features, iemocap, hubert, wavlm, wav2vec 2.0
When recognizing emotions from speech, we encounter two common problems: how to
optimally capture emotion-relevant information from the speech signal and how to
best quantify or categorize the noisy, subjective emotion labels. Self-supervised
pre-trained representations can robustly capture information from speech, enabling
state-of-the-art results in many downstream tasks, including emotion recognition.
However, better ways of aggregating the information across time need to be
considered as the relevant emotion information is likely to appear piecewise and
not uniformly across the signal. For the labels, we need to take into account
that there is a substantial degree of noise that comes from the subjective human
annotations. In this paper, we propose a novel approach to attentive pooling
based on correlations between the representations' coefficients combined with
label smoothing, a method aiming to reduce the confidence of the classifier on
the training labels. We evaluate our proposed approach on the benchmark dataset
IEMOCAP, and demonstrate high performance surpassing that in the literature. The
code to reproduce the results is available at
github.com/skakouros/s3prl_attentive_correlation.
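The two ingredients named in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see the linked repository for that): the pooling function below shows only the plain channel-wise correlation idea, without the learned attention weights the paper adds on top, and the function names are illustrative.

```python
import numpy as np

def smooth_labels(labels, num_classes, eps=0.1):
    # Label smoothing: keep probability (1 - eps) on the true class and
    # spread eps uniformly over all classes, reducing the classifier's
    # confidence on (noisy) training labels.
    onehot = np.eye(num_classes)[labels]
    return (1.0 - eps) * onehot + eps / num_classes

def channelwise_correlation_pooling(feats):
    # feats: (T, D) frame-level self-supervised features (e.g. HuBERT).
    # Pool over time by computing pairwise correlations between the D
    # channels; the upper triangle gives a fixed-size utterance embedding.
    # Illustrative only -- the paper's attentive variant additionally
    # weights these correlations with a learned attention mechanism.
    corr = np.corrcoef(feats, rowvar=False)      # (D, D)
    iu = np.triu_indices_from(corr)
    return corr[iu]

# Example: 4 emotion classes, as in the common IEMOCAP setup.
targets = smooth_labels(np.array([0, 2]), num_classes=4, eps=0.1)
emb = channelwise_correlation_pooling(np.random.randn(100, 8))
```

With `eps=0.1` and 4 classes, each true class receives 0.9 + 0.1/4 = 0.925 and every other class 0.025, so each target row still sums to one.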
@inproceedings{BUT185201,
author="KAKOUROS, S. and STAFYLAKIS, T. and MOŠNER, L. and BURGET, L.",
title="Speech-Based Emotion Recognition with Self-Supervised Models Using Attentive Channel-Wise Correlations and Label Smoothing",
booktitle="Proceedings of ICASSP 2023",
year="2023",
pages="1--5",
publisher="IEEE Signal Processing Society",
address="Rhodes Island",
doi="10.1109/ICASSP49357.2023.10094673",
isbn="978-1-7281-6327-7",
url="https://ieeexplore.ieee.org/document/10094673"
}