Ing. Kateřina Žmolíková, Ph.D.

External Pedagogue

izmolikova@fit.vut.cz
BUT personal ID: 143646

Publications

  • 2023

    ŽMOLÍKOVÁ, K.; DELCROIX, M.; OCHIAI, T.; ČERNOCKÝ, J.; KINOSHITA, K.; YU, D. Neural Target Speech Extraction: An overview. IEEE Signal Processing Magazine, 2023, vol. 40, no. 3, p. 8-29. ISSN: 1558-0792.

  • 2022

    DE BENITO GORRON, D.; ŽMOLÍKOVÁ, K.; TORRE TOLEDANO, D. Source Separation for Sound Event Detection in domestic environments using jointly trained models. In Proceedings of The 17th International Workshop on Acoustic Signal Enhancement (IWAENC 2022). Bamberg: IEEE Signal Processing Society, 2022. p. 1-5. ISBN: 978-1-6654-6867-1.

    DELCROIX, M.; KINOSHITA, K.; OCHIAI, T.; ŽMOLÍKOVÁ, K.; SATO, H.; NAKATANI, T. Listen only to me! How well can target speech extraction handle false alarms? In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Proceedings of Interspeech. Incheon: International Speech Communication Association, 2022. p. 216-220. ISSN: 1990-9772.

    KOCOUR, M.; ŽMOLÍKOVÁ, K.; ONDEL YANG, L.; ŠVEC, J.; DELCROIX, M.; OCHIAI, T.; BURGET, L.; ČERNOCKÝ, J. Revisiting joint decoding based multi-talker speech recognition with DNN acoustic model. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Proceedings of Interspeech. Incheon: International Speech Communication Association, 2022. p. 4955-4959. ISSN: 1990-9772.

    ŠVEC, J.; ŽMOLÍKOVÁ, K.; KOCOUR, M.; DELCROIX, M.; OCHIAI, T.; MOŠNER, L.; ČERNOCKÝ, J. Analysis of impact of emotions on target speech extraction and speech separation. In Proceedings of The 17th International Workshop on Acoustic Signal Enhancement (IWAENC 2022). Bamberg: IEEE Signal Processing Society, 2022. p. 1-5. ISBN: 978-1-6654-6867-1.

  • 2021

    DELCROIX, M.; ŽMOLÍKOVÁ, K.; OCHIAI, T.; KINOSHITA, K.; NAKATANI, T. Speaker activity driven neural speech extraction. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Toronto: IEEE Signal Processing Society, 2021. p. 6099-6103. ISBN: 978-1-7281-7605-5.

    LANDINI, F.; LOZANO DÍEZ, A.; BURGET, L.; DIEZ SÁNCHEZ, M.; SILNOVA, A.; ŽMOLÍKOVÁ, K.; GLEMBEK, O.; MATĚJKA, P.; STAFYLAKIS, T.; BRUMMER, J. BUT System Description for the Third DIHARD Speech Diarization Challenge. Proceedings available at the DIHARD Challenge GitHub, published online by LDC and the University of Pennsylvania, 2021. p. 1-5.

    VYDANA, H.; KARAFIÁT, M.; ŽMOLÍKOVÁ, K.; BURGET, L.; ČERNOCKÝ, J. Jointly Trained Transformers Models for Spoken Language Translation. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Toronto, Ontario: IEEE Signal Processing Society, 2021. p. 7513-7517. ISBN: 978-1-7281-7605-5.

    ŽMOLÍKOVÁ, K.; DELCROIX, M.; BURGET, L.; NAKATANI, T.; ČERNOCKÝ, J. Integration of Variational Autoencoder and Spatial Clustering for Adaptive Multi-Channel Neural Speech Separation. In 2021 IEEE Spoken Language Technology Workshop, SLT 2021 - Proceedings. Shenzhen - virtual: IEEE Signal Processing Society, 2021. p. 889-896. ISBN: 978-1-7281-7066-4.

    ŽMOLÍKOVÁ, K.; DELCROIX, M.; RAJ, D.; WATANABE, S.; ČERNOCKÝ, J. Auxiliary Loss Function for Target Speech Extraction and Recognition with Weak Supervision Based on Speaker Characteristics. In Proceedings of 2021 Interspeech. Proceedings of Interspeech. Brno: International Speech Communication Association, 2021. p. 1464-1468. ISSN: 1990-9772.

  • 2020

    DELCROIX, M.; OCHIAI, T.; ŽMOLÍKOVÁ, K.; KINOSHITA, K.; TAWARA, N.; NAKATANI, T.; ARAKI, S. Improving Speaker Discrimination of Target Speech Extraction With Time-Domain Speakerbeam. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Barcelona: IEEE Signal Processing Society, 2020. p. 691-695. ISBN: 978-1-5090-6631-5.

    LANDINI, F.; WANG, S.; DIEZ SÁNCHEZ, M.; BURGET, L.; MATĚJKA, P.; ŽMOLÍKOVÁ, K.; MOŠNER, L.; SILNOVA, A.; PLCHOT, O.; NOVOTNÝ, O.; ZEINALI, H.; ROHDIN, J. BUT System for the Second DIHARD Speech Diarization Challenge. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Barcelona: IEEE Signal Processing Society, 2020. p. 6529-6533. ISBN: 978-1-5090-6631-5.

    ŽMOLÍKOVÁ, K.; KOCOUR, M.; LANDINI, F.; BENEŠ, K.; KARAFIÁT, M.; VYDANA, H.; LOZANO DÍEZ, A.; PLCHOT, O.; BASKAR, M.; ŠVEC, J.; MOŠNER, L.; MALENOVSKÝ, V.; BURGET, L.; YUSUF, B.; NOVOTNÝ, O.; GRÉZL, F.; SZŐKE, I.; ČERNOCKÝ, J. BUT System for CHiME-6 Challenge. Proceedings of CHiME 2020 Virtual Workshop. Barcelona: University of Sheffield, 2020. p. 1-3.

  • 2019

    DELCROIX, M.; ŽMOLÍKOVÁ, K.; OCHIAI, T.; KINOSHITA, K.; ARAKI, S.; NAKATANI, T. Compact Network for Speakerbeam Target Speaker Extraction. In Proceedings of ICASSP. Brighton: IEEE Signal Processing Society, 2019. p. 6965-6969. ISBN: 978-1-5386-4658-8.

    DELCROIX, M.; ŽMOLÍKOVÁ, K.; OCHIAI, T.; KINOSHITA, K.; ARAKI, S.; NAKATANI, T. Evaluation of SpeakerBeam target speech extraction in real noisy and reverberant conditions. The Journal of the Acoustical Society of Japan, 2019, vol. 2019, no. 2, p. 1-2. ISSN: 0369-4232.

    ŽMOLÍKOVÁ, K.; DELCROIX, M.; KINOSHITA, K.; OCHIAI, T.; NAKATANI, T.; BURGET, L.; ČERNOCKÝ, J. SpeakerBeam: Speaker Aware Neural Network for Target Speaker Extraction in Speech Mixtures. IEEE Journal of Selected Topics in Signal Processing, 2019, vol. 13, no. 4, p. 800-814. ISSN: 1932-4553.

  • 2018

    DELCROIX, M.; ŽMOLÍKOVÁ, K.; KINOSHITA, K.; ARAKI, S.; OGAWA, A.; NAKATANI, T. SpeakerBeam: A New Deep Learning Technology for Extracting Speech of a Target Speaker Based on the Speaker's Voice Characteristics. NTT Technical Review, 2018, vol. 16, no. 11, p. 19-24. ISSN: 1348-3447.

    DELCROIX, M.; ŽMOLÍKOVÁ, K.; KINOSHITA, K.; OGAWA, A.; NAKATANI, T. Single Channel Target Speaker Extraction and Recognition with Speaker Beam. In Proceedings of ICASSP 2018. Calgary: IEEE Signal Processing Society, 2018. p. 5554-5558. ISBN: 978-1-5386-4658-8.

    DIEZ SÁNCHEZ, M.; LANDINI, F.; BURGET, L.; ROHDIN, J.; SILNOVA, A.; ŽMOLÍKOVÁ, K.; NOVOTNÝ, O.; VESELÝ, K.; GLEMBEK, O.; PLCHOT, O.; MOŠNER, L.; MATĚJKA, P. BUT system for DIHARD Speech Diarization Challenge 2018. In Proceedings of Interspeech 2018. Proceedings of Interspeech. Hyderabad: International Speech Communication Association, 2018. p. 2798-2802. ISSN: 1990-9772.

    ŽMOLÍKOVÁ, K.; DELCROIX, M.; KINOSHITA, K.; HIGUCHI, T.; NAKATANI, T.; ČERNOCKÝ, J. Optimization of Speaker-aware Multichannel Speech Extraction with ASR Criterion. In Proceedings of ICASSP 2018. Calgary: IEEE Signal Processing Society, 2018. p. 6702-6706. ISBN: 978-1-5386-4658-8.

  • 2017

    HIGUCHI, T.; KINOSHITA, K.; DELCROIX, M.; ŽMOLÍKOVÁ, K.; NAKATANI, T. Deep clustering-based beamforming for separation with unknown number of sources. In Proceedings of Interspeech 2017. Proceedings of Interspeech. Stockholm: International Speech Communication Association, 2017. p. 1183-1187. ISSN: 1990-9772.

    KARAFIÁT, M.; VESELÝ, K.; ŽMOLÍKOVÁ, K.; DELCROIX, M.; WATANABE, S.; BURGET, L.; ČERNOCKÝ, J.; SZŐKE, I. Training Data Augmentation and Data Selection. In New Era for Robust Speech Recognition: Exploiting Deep Learning. Computer Science, Artificial Intelligence. Heidelberg: Springer International Publishing, 2017. p. 245-260. ISBN: 978-3-319-64679-4.

    ŽMOLÍKOVÁ, K.; DELCROIX, M.; KINOSHITA, K.; HIGUCHI, T.; OGAWA, A.; NAKATANI, T. Learning Speaker Representation for Neural Network Based Multichannel Speaker Extraction. In Proceedings of ASRU 2017. Okinawa: IEEE Signal Processing Society, 2017. p. 8-15. ISBN: 978-1-5090-4788-8.

    ŽMOLÍKOVÁ, K.; DELCROIX, M.; KINOSHITA, K.; HIGUCHI, T.; OGAWA, A.; NAKATANI, T. Speaker-aware neural network based beamformer for speaker extraction in speech mixtures. In Proceedings of Interspeech 2017. Proceedings of Interspeech. Stockholm: International Speech Communication Association, 2017. p. 2655-2659. ISSN: 1990-9772.

  • 2016

    VESELÝ, K.; WATANABE, S.; ŽMOLÍKOVÁ, K.; KARAFIÁT, M.; BURGET, L.; ČERNOCKÝ, J. Sequence Summarizing Neural Network for Speaker Adaptation. In Proceedings of the 41st IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2016). Shanghai: IEEE Signal Processing Society, 2016. p. 5315-5319. ISBN: 978-1-4799-9988-0.

    ŽMOLÍKOVÁ, K.; KARAFIÁT, M.; VESELÝ, K.; DELCROIX, M.; WATANABE, S.; BURGET, L.; ČERNOCKÝ, J. Data selection by sequence summarizing neural network in mismatch condition training. In Proceedings of Interspeech 2016. San Francisco: International Speech Communication Association, 2016. p. 2354-2358. ISBN: 978-1-5108-3313-5.
