Publication Details
Integration of Variational Autoencoder and Spatial Clustering for Adaptive Multi-Channel Neural Speech Separation
Žmolíková Kateřina, Ing. (DCGM)
Delcroix Marc
Burget Lukáš, doc. Ing., Ph.D. (DCGM)
Nakatani Tomohiro
Černocký Jan, prof. Dr. Ing. (DCGM)
Multi-channel speech separation, variational autoencoder, spatial clustering, DOLPHIN
In this paper, we propose a method combining a variational autoencoder model of
speech with a spatial clustering approach for multi-channel speech separation. The
advantage of integrating spatial clustering with a spectral model has been shown in
several works. As the spectral model, previous works used either factorial
generative models of the mixed speech or discriminative neural networks. In our
work, we combine the strengths of both approaches by building a factorial model
based on a generative neural network, a variational autoencoder. By doing so, we
can exploit the modeling power of neural networks while at the same time keeping
a structured model. Such a model can be advantageous when adapting to new noise
conditions, as only the noise part of the model needs to be modified. We show
experimentally that our model significantly outperforms the previous factorial
model based on a Gaussian mixture model (DOLPHIN), performs comparably to the
integration of permutation-invariant training with spatial clustering, and enables
easy adaptation to new noise conditions.
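
As a rough illustration of the alternating scheme the abstract describes, the
Python sketch below iterates between a spectral-model step and a spatial-clustering
step over per-source time-frequency masks. This is not the authors' code: the
names vae_refine_masks and spatial_posteriors are hypothetical, the VAE is
replaced by a trivial smoothing stand-in, and the spatial step is a simplified
clustering of normalized channel vectors rather than the probabilistic spatial
model used in the paper.

import numpy as np

def spatial_posteriors(obs, masks, eps=1e-8):
    """One spatial-clustering-style update: estimate a per-source spatial
    'centroid' (mean direction of the normalized channel vectors, weighted by
    the current masks) and recompute source posteriors at each TF bin.
    Frequency permutation alignment is ignored in this toy.
    obs: (F, T, C) complex STFT of the multi-channel mixture.
    masks: (S, F, T) current source posteriors."""
    z = obs / (np.linalg.norm(obs, axis=-1, keepdims=True) + eps)  # unit vectors
    logits = np.empty_like(masks)
    for s in range(masks.shape[0]):
        w = masks[s][..., None]                                    # (F, T, 1)
        centroid = (w * z).sum(axis=1)                             # (F, C)
        centroid /= np.linalg.norm(centroid, axis=-1, keepdims=True) + eps
        # similarity of each TF bin's direction to the source centroid
        logits[s] = np.abs(np.einsum('ftc,fc->ft', z, centroid.conj()))
    return logits / (logits.sum(axis=0, keepdims=True) + eps)

def vae_refine_masks(masks):
    """Placeholder for the VAE-based factorial spectral model: in the paper,
    a trained generative network models the speech sources while a separate
    noise model (the only part modified for new noise conditions) handles the
    noise. Here we merely smooth the masks along time as a stand-in."""
    kernel = np.ones(5) / 5.0
    smoothed = np.apply_along_axis(
        lambda m: np.convolve(m, kernel, mode='same'), -1, masks)
    return smoothed / (smoothed.sum(axis=0, keepdims=True) + 1e-8)

def separate(obs, n_src=2, n_iter=10, rng=None):
    """Alternate the spectral (VAE stand-in) and spatial updates."""
    rng = rng or np.random.default_rng(0)
    F, T, _ = obs.shape
    masks = rng.dirichlet(np.ones(n_src), size=(F, T)).transpose(2, 0, 1)
    for _ in range(n_iter):
        masks = vae_refine_masks(masks)         # spectral model step
        masks = spatial_posteriors(obs, masks)  # spatial clustering step
    return masks  # per-source TF masks for masking or beamforming

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    mix = rng.standard_normal((129, 50, 4)) + 1j * rng.standard_normal((129, 50, 4))
    print(separate(mix).shape)  # (2, 129, 50)

In the paper itself, the spectral step would come from the trained VAE-based
factorial model and the spatial step from a probabilistic spatial mixture model;
the factorial structure is what allows retraining only the noise component when
adapting to new noise conditions.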
@inproceedings{BUT175809,
author="Kateřina {Žmolíková} and Marc {Delcroix} and Lukáš {Burget} and Tomohiro {Nakatani} and Jan {Černocký}",
title="Integration of Variational Autoencoder and Spatial Clustering for Adaptive Multi-Channel Neural Speech Separation",
booktitle="2021 IEEE Spoken Language Technology Workshop, SLT 2021 - Proceedings",
year="2021",
pages="889--896",
publisher="IEEE Signal Processing Society",
address="Shenzhen - virtual",
doi="10.1109/SLT48900.2021.9383612",
isbn="978-1-7281-7066-4",
url="https://ieeexplore.ieee.org/document/9383612"
}