Publication Details
BUT-FIT at SemEval-2020 Task 4: Multilingual commonsense
Fajčík Martin, Ing., Ph.D. (DCGM)
Dočekal Martin, Ing. (DCGM)
Smrž Pavel, doc. RNDr., Ph.D. (DCGM)
NLP, commonsense, pretrained language models, multilingual, machine translation
We participated in all three subtasks. In subtasks A and B, our submissions are
based on pretrained language representation models (namely ALBERT) and data
augmentation. We experimented with solving the task for another language, Czech,
by means of multilingual models and a machine-translated dataset, or by
translating the model inputs. We show that with a strong machine translation
system, our approach can be used in another language with only a small loss in
accuracy. In subtask C, our submission, which is based on a pretrained
sequence-to-sequence model (BART), ranked 1st in the BLEU score ranking;
however, we show that the correlation between BLEU and human evaluation, in
which our submission ended up 4th, is low. We analyse the metrics used in the
evaluation and propose an additional score based on the model from subtask B,
which correlates well with our manual ranking, as well as a reranking method
based on the same principle. We performed an error and dataset analysis for all
subtasks and present our findings.
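The reranking idea mentioned in the abstract can be illustrated with a short sketch. This is not the authors' code: the plausibility_score function below is a hypothetical stand-in for the subtask B classifier (here replaced by a trivial lexical-overlap heuristic so the example runs end to end), and the candidate sentences are invented. It shows candidate explanations being reordered by a model-based score, and a rank correlation between that score and sentence-level BLEU, which is the kind of metric analysis the abstract refers to.

from typing import List

import sacrebleu                   # pip install sacrebleu
from scipy.stats import spearmanr  # pip install scipy


def plausibility_score(statement: str, explanation: str) -> float:
    """Hypothetical stand-in for the subtask B model's score.

    In the paper this would be a pretrained classifier judging how well
    the explanation fits the nonsensical statement; here a trivial
    lexical-overlap heuristic is used purely so the sketch is runnable.
    """
    s = set(statement.lower().split())
    e = set(explanation.lower().split())
    return len(s & e) / max(len(e), 1)


def rerank(statement: str, candidates: List[str]) -> List[str]:
    """Order candidate explanations by the model-based score, best first."""
    return sorted(candidates,
                  key=lambda c: plausibility_score(statement, c),
                  reverse=True)


if __name__ == "__main__":
    statement = "He put an elephant into the fridge."
    reference = "An elephant is much bigger than a fridge."
    candidates = [
        "An elephant cannot fit into a fridge.",
        "Fridges are cold.",
        "He likes elephants very much.",
    ]

    model_scores = [plausibility_score(statement, c) for c in candidates]
    bleu_scores = [sacrebleu.sentence_bleu(c, [reference]).score
                   for c in candidates]

    print("reranked:", rerank(statement, candidates))
    # Rank correlation between BLEU and the model-based score over the
    # candidates; the paper reports that BLEU correlates poorly with
    # human judgements on this task.
    rho, _ = spearmanr(bleu_scores, model_scores)
    print(f"Spearman rho(BLEU, model score) = {rho:.2f}")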
@inproceedings{BUT168507,
author="Josef {Jon} and Martin {Fajčík} and Martin {Dočekal} and Pavel {Smrž}",
title="BUT-FIT at SemEval-2020 Task 4: Multilingual commonsense",
booktitle="Proceedings of the Fourteenth Workshop on Semantic Evaluation",
year="2020",
pages="374--390",
publisher="Association for Computational Linguistics",
address="Barcelona",
isbn="978-1-952148-31-6",
url="https://www.aclweb.org/anthology/2020.semeval-1.46/"
}