Result detail

Continual Unsupervised Domain Adaptation for Audio Deepfake Detection

CHEN, X.; LU, W.; ZHANG, R.; XU, J.; LU, X.; ZHANG, L.; WEI, J. Continual Unsupervised Domain Adaptation for Audio Deepfake Detection. In Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing. Hyderabad, India: Institute of Electrical and Electronics Engineers Inc., 2025. p. 1-5. ISBN: 979-8-3503-6874-1.
Type
paper in conference proceedings
Language
English
Authors
Chen Xiaohuan
Lu Wenhuan
Zhang Ruiteng
Xu Junhai
Lu Xugang
Zhang Lin, Ph.D.
Wei Jianguo
Abstract

Audio deepfake detection (ADD) aims to verify the authenticity of audio, but its performance declines sharply under significant domain discrepancies caused by unknown datasets. Unsupervised domain adaptation (UDA) has been applied to mitigate this domain mismatch; however, as generative models evolve, existing UDA methods suffer from catastrophic forgetting when facing continuously emerging spoofing methods. To address this challenge, we introduce continual UDA for ADD, which sequentially trains across multiple target domains with continual learning. We propose CD-DAT, a causality-distillation-based continual domain adversarial training framework for continual UDA. Specifically, we employ the domain adversarial training (DAT) framework to learn deep features that are both spoofing-discriminative and domain-invariant. In addition, we design a continual learning algorithm that uses causality distillation to capture the mapping between utterances and classes, effectively mitigating forgetting while maintaining generalization. Experiments demonstrated that CD-DAT improved detection performance across all domains, confirming its memory stability and learning plasticity.
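
As an illustration only, not the authors' implementation, the PyTorch sketch below shows the two ingredients the abstract names: domain adversarial training through a gradient-reversal layer, and a distillation term that keeps the current model's spoofing posteriors close to a frozen copy trained on earlier domains. The causality distillation of CD-DAT is not detailed in this record, so a generic temperature-scaled KL distillation stands in for it; the names GradReverse, DATModel, cd_dat_step and all dimensions are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) the gradient on the way back."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class DATModel(nn.Module):
    """Shared encoder with a spoofing head and an adversarial domain head (hypothetical sizes)."""

    def __init__(self, in_dim=128, feat_dim=256, n_domains=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.spoof_head = nn.Linear(feat_dim, 2)           # bona fide vs. spoofed
        self.domain_head = nn.Linear(feat_dim, n_domains)  # source vs. target

    def forward(self, x, lambd=1.0):
        h = self.encoder(x)
        return self.spoof_head(h), self.domain_head(GradReverse.apply(h, lambd))


def cd_dat_step(model, old_model, src_x, src_y, tgt_x, lambd=1.0, tau=2.0, alpha=1.0):
    """One illustrative training step on a new, unlabelled target domain."""
    src_spoof, src_dom = model(src_x, lambd)
    tgt_spoof, tgt_dom = model(tgt_x, lambd)

    # Spoofing-discriminative loss on the labelled source batch.
    loss_cls = F.cross_entropy(src_spoof, src_y)

    # Adversarial domain loss: the domain head separates source from target,
    # while the reversed gradient pushes the encoder towards domain invariance.
    dom_logits = torch.cat([src_dom, tgt_dom])
    dom_labels = torch.cat([
        torch.zeros(len(src_x), dtype=torch.long, device=src_x.device),
        torch.ones(len(tgt_x), dtype=torch.long, device=tgt_x.device),
    ])
    loss_dom = F.cross_entropy(dom_logits, dom_labels)

    # Distillation against a frozen copy trained on the previous domains, to
    # limit forgetting; plain temperature-scaled KL is used here in place of
    # the paper's causality distillation, which this record does not specify.
    with torch.no_grad():
        old_spoof, _ = old_model(tgt_x, lambd)
    loss_kd = F.kl_div(F.log_softmax(tgt_spoof / tau, dim=-1),
                       F.softmax(old_spoof / tau, dim=-1),
                       reduction="batchmean") * tau * tau

    return loss_cls + loss_dom + alpha * loss_kd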

Keywords

Audio deepfake detection | causality distillation | continual learning | unsupervised domain adaptation

URL
https://ieeexplore.ieee.org/document/10890538
Year
2025
Pages
5
Proceedings
Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing
Conference
ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
ISBN
979-8-3503-6874-1
Publisher
Institute of Electrical and Electronics Engineers Inc.
Location
Hyderabad, India
DOI
10.1109/ICASSP49660.2025.10890538
EID Scopus
BibTeX
@inproceedings{BUT199990,
  author="{} and  {} and  {} and  {} and  {} and Lin {Zhang} and  {}",
  title="Continual Unsupervised Domain Adaptation for Audio Deepfake Detection",
  booktitle="Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing",
  year="2025",
  pages="5",
  publisher="Institute of Electrical and Electronics Engineers Inc.",
  address="Hyderabad, Indická republika",
  doi="10.1109/ICASSP49660.2025.10890538",
  isbn="979-8-3503-6874-1",
  url="https://ieeexplore.ieee.org/document/10890538"
}
Projects
Soudobé metody zpracování, analýzy a zobrazování multimediálních a 3D dat (Contemporary methods for processing, analysis and visualization of multimedia and 3D data), VUT, VUT internal projects, FIT-S-23-8278, start: 2023-03-01, end: 2026-02-28, ongoing
Research groups
Department