Publication Details
Parameter-Efficient Tuning With Adaptive Bottlenecks For Automatic Speech Recognition
Vanderreydt Geoffroy
Prasad Amrutha (DCGM)
Khalil Driss
Madikeri Srikanth
Demuynck Kris
Motlíček Petr, doc. Ing., Ph.D. (DCGM)
Keywords: ASR, XLSR, Adapters, ATC
Transfer learning from large multilingual pretrained models, like XLSR, has become the new paradigm for Automatic Speech Recognition (ASR). Considering their ever-increasing size, fine-tuning all the weights has become impractical when the computing budget is limited. Adapters are lightweight trainable modules inserted between layers while the pretrained part is kept frozen. They form a parameter-efficient fine-tuning method, but they still require a large bottleneck size to match standard fine-tuning performance. In this paper, we propose ABSADAPTER, a method to further reduce the parameter budget for equal task performance. Specifically, ABSADAPTER uses an Adaptive Bottleneck Scheduler to redistribute the adapter's weights to the layers that need adaptation the most. By training only 8% of the XLSR model, ABSADAPTER achieves close to standard fine-tuning performance on a domain-shifted Air-Traffic Communication (ATC) ASR task.
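
To make the abstract's two ingredients concrete, below is a minimal PyTorch-style sketch, not the paper's implementation: BottleneckAdapter is the standard residual down-/up-projection adapter the abstract describes, and allocate_bottlenecks is a hypothetical illustration of how an adaptive bottleneck scheduler might split a fixed parameter budget across layers according to per-layer importance scores (the paper's actual scoring and scheduling rule are its own contribution and are not reproduced here).

import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Residual bottleneck adapter: down-project, nonlinearity, up-project."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        # Zero-init the up-projection so the frozen model's behavior is
        # unchanged at the start of fine-tuning.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        # Skip connection keeps the pretrained representation as the baseline.
        return x + self.up(self.act(self.down(x)))

def allocate_bottlenecks(importance, total_budget, min_dim=1):
    """Hypothetical budget split: give each layer a bottleneck size
    proportional to its importance score, under a fixed total number of
    bottleneck dimensions (rounding may shift the total slightly)."""
    total = sum(importance)
    return [max(min_dim, round(s / total * total_budget)) for s in importance]

# Example: four layers with uneven importance sharing 256 bottleneck dims.
print(allocate_bottlenecks([0.2, 1.0, 0.5, 2.0], total_budget=256))
# -> [14, 69, 35, 138]

The intuition this sketch captures is that layers whose representations must change most under the domain shift receive wide bottlenecks, while layers that transfer well keep tiny adapters, which is how a method like ABSADAPTER can stay within a small trainable fraction (8% of XLSR in the abstract) instead of using one uniformly large bottleneck at every layer.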
@inproceedings{BUT187932,
author="VANDERREYDT, G. and PRASAD, A. and KHALIL, D. and MADIKERI, S. and DEMUYNCK, K. and MOTLÍČEK, P.",
title="Parameter-Efficient Tuning With Adaptive Bottlenecks For Automatic Speech Recognition",
booktitle="Proceedings of IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)",
year="2023",
pages="1--7",
publisher="IEEE Signal Processing Society",
address="Taipei",
doi="10.1109/ASRU57964.2023.10389769",
isbn="979-8-3503-0689-7",
url="https://ieeexplore.ieee.org/document/10389769"
}