Publication Details

HyperConformer: Multi-head HyperMixer for Efficient Speech Recognition

MAI, F.; ZULUAGA-GOMEZ, J.; PARCOLLET, T.; MOTLÍČEK, P. HyperConformer: Multi-head HyperMixer for Efficient Speech Recognition. In Proceedings of the Annual Conference of International Speech Communication Association, INTERSPEECH. Proceedings of Interspeech. Dublin: International Speech Communication Association, 2023. p. 2213-2217. ISSN: 1990-9772.
Czech title
HyperConformer: HyperMixer s více hlavami pro efektivní rozpoznávání řeči
Type
conference paper
Language
English
Authors
MAI, F.
ZULUAGA-GOMEZ, J.
PARCOLLET, T.
Motlíček Petr, doc. Ing., Ph.D. (DCGM)
URL
https://www.isca-archive.org/interspeech_2023/mai23_interspeech.pdf
Keywords

Hypernetworks, HyperMixer, Efficient Automatic Speech Recognition, LibriSpeech, SpeechBrain

Abstract

State-of-the-art ASR systems have achieved promising results by modeling local and global interactions separately. While the former can be computed efficiently, global interactions are usually modeled via attention mechanisms, which are expensive for long input sequences. Here, we address this by extending HyperMixer, an efficient alternative to attention exhibiting linear complexity, to the Conformer architecture for speech recognition, leading to HyperConformer. In particular, multi-head HyperConformer achieves comparable or higher recognition performance while being more efficient than Conformer in terms of inference speed, memory, parameter count, and available training data. HyperConformer achieves a word error rate of 2.9% on LibriSpeech test-clean with less than 8M neural parameters and a peak memory during training of 5.7GB, hence trainable with accessible hardware. Encoder speed is between 38% on mid-length speech and 56% on long speech faster than an equivalent Conformer.
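The linear-complexity token mixing mentioned in the abstract can be illustrated with a minimal, hypothetical PyTorch sketch of a single-head HyperMixer mixing step: a position-wise hypernetwork generates the token-mixing weights from the input itself, so the cost grows linearly rather than quadratically with sequence length. This is a simplified re-implementation for illustration only, not the SpeechBrain code released with the paper; the class and parameter names (HyperMixerTokenMixing, d_hidden, hyper_w1, hyper_w2) are ours, positional information is omitted, and the multi-head variant used in HyperConformer additionally splits the feature dimension into heads mixed this way.

import torch
import torch.nn as nn

class HyperMixerTokenMixing(nn.Module):
    """Simplified single-head sketch of HyperMixer token mixing (illustrative, not the paper's code)."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        # Position-wise hypernetworks: each token produces one row of W1(X) / W2(X).
        self.hyper_w1 = nn.Sequential(
            nn.Linear(d_model, d_model), nn.GELU(), nn.Linear(d_model, d_hidden)
        )
        self.hyper_w2 = nn.Sequential(
            nn.Linear(d_model, d_model), nn.GELU(), nn.Linear(d_model, d_hidden)
        )
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, N, d_model)
        w1 = self.hyper_w1(x)                      # (batch, N, d_hidden)
        w2 = self.hyper_w2(x)                      # (batch, N, d_hidden)
        # Token mixing W2(X) @ act(W1(X)^T @ X): cost is O(N * d_hidden * d_model),
        # i.e. linear in sequence length N, unlike the O(N^2) of self-attention.
        mixed = self.act(w1.transpose(1, 2) @ x)   # (batch, d_hidden, d_model)
        return w2 @ mixed                          # (batch, N, d_model)

# Usage sketch: mix 300 frames of 144-dimensional encoder features.
y = HyperMixerTokenMixing(d_model=144, d_hidden=256)(torch.randn(2, 300, 144))
print(y.shape)  # torch.Size([2, 300, 144])

In HyperConformer this kind of mixing replaces the multi-head self-attention block of the Conformer encoder, while the convolution module continues to model local interactions.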

Published
2023
Pages
2213–2217
Journal
Proceedings of Interspeech, vol. 2023, no. 08, ISSN 1990-9772
Proceedings
Proceedings of the Annual Conference of International Speech Communication Association, INTERSPEECH
Publisher
International Speech Communication Association
Place
Dublin
DOI
10.21437/Interspeech.2023-1611
EID Scopus
BibTeX
@inproceedings{BUT187786,
  author="MAI, F. and ZULUAGA-GOMEZ, J. and PARCOLLET, T. and MOTLÍČEK, P.",
  title="HyperConformer: Multi-head HyperMixer for Efficient Speech Recognition",
  booktitle="Proceedings of the Annual Conference of International Speech Communication Association, INTERSPEECH",
  year="2023",
  journal="Proceedings of Interspeech",
  volume="2023",
  number="08",
  pages="2213--2217",
  publisher="International Speech Communication Association",
  address="Dublin",
  doi="10.21437/Interspeech.2023-1611",
  issn="1990-9772",
  url="https://www.isca-archive.org/interspeech_2023/mai23_interspeech.pdf"
}