Publication Details

Study of Large Data Resources for Multilingual Training and System Porting

GRÉZL, F.; EGOROVA, E.; KARAFIÁT, M. Study of Large Data Resources for Multilingual Training and System Porting. In Procedia Computer Science. Yogyakarta: Elsevier Science, 2016, p. 15-22. ISSN: 1877-0509.
Czech title
Studie velkých datových zdrojů pro multilingvální trénování a portování systémů
Type
conference paper
Language
English
Authors
Grézl František, Ing., Ph.D. (DCGM)
Egorova Ekaterina, Ing., Ph.D.
Karafiát Martin, Ing., Ph.D. (DCGM)
URL
http://www.sciencedirect.com/science/article/pii/S1877050916300382
Keywords

Stacked Bottle-Neck; feature extraction; multilingual training; large data; Fisher database

Abstract

This study investigates the behavior of a feature extraction neural network trained on a large amount of single-language data ("source language") when applied to a set of under-resourced target languages. The coverage of the source-language acoustic space was varied in two ways: (1) by changing the amount of training data and (2) by altering the level of detail of the acoustic units (by changing the triphone clustering). We observe the effect of these changes on target-language performance in two scenarios: (1) the source-language NNs are used directly, and (2) the NNs are first ported to the target language. The results show that increasing either the coverage or the level of detail in the source language improves target-language system performance in both scenarios. In the first scenario, both source-language characteristics have about the same effect; in the second, the amount of source-language data is more important than the level of detail. The possibility of including the large data set in multilingual training was also investigated. Our experiments point out a possible risk of over-weighting the NNs towards the source language with large data, which degrades performance on some of the target languages compared to a setting where the amounts of data per language are balanced.

Published
2016
Pages
15–22
Journal
Procedia Computer Science, vol. 81, 2016, ISSN 1877-0509
Proceedings
Procedia Computer Science
Publisher
Elsevier Science
Place
Yogyakarta
DOI
10.1016/j.procs.2016.04.024
UT WoS
000387446500002
EID Scopus
BibTeX
@inproceedings{BUT130953,
  author="František {Grézl} and Ekaterina {Egorova} and Martin {Karafiát}",
  title="Study of Large Data Resources for Multilingual Training and System Porting",
  booktitle="Procedia Computer Science",
  year="2016",
  journal="Procedia Computer Science",
  volume="81",
  pages="15--22",
  publisher="Elsevier Science",
  address="Yogyakarta",
  doi="10.1016/j.procs.2016.04.024",
  issn="1877-0509",
  url="http://www.sciencedirect.com/science/article/pii/S1877050916300382"
}
Files