Publication Details
Effects of diversity incentives on sample diversity and downstream model performance in LLM-based text augmentation
ČEGIŇ, J.
Pecher Branislav, Ing. (DCGM)
Šimko Jakub, doc. Ing., PhD. (DCGM)
SRBA, I.
Bieliková Mária, prof. Ing., Ph.D. (DCGM)
large language models, data augmentation, lexical diversity, text augmentation, crowdsourcing
The latest generative large language models (LLMs) have found their application
in data augmentation tasks, where small numbers of text samples are
LLM-paraphrased and then used to fine-tune downstream models. However, more
research is needed to assess how different prompts, seed data selection
strategies, filtering methods, or model settings affect the quality of
paraphrased data (and downstream models). In this study, we investigate three
text diversity incentive methods well established in crowdsourcing: taboo words,
hints by previous outlier solutions, and chaining on previous outlier solutions.
Using these incentive methods as part of instructions to LLMs augmenting text
datasets, we measure their effects on generated texts' lexical diversity and
downstream model performance. We compare the effects across 5 different LLMs, 6 datasets, and 2 downstream models. We show that diversity is most increased by taboo words, while downstream model performance is highest with hints.
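
To make the three incentive methods concrete, below is a minimal Python sketch of how such prompt instructions might be assembled, together with a simple distinct-n proxy for lexical diversity. The prompt templates, the taboo-word selection rule, and the metric are illustrative assumptions, not the paper's exact implementation.

import re
from collections import Counter

def taboo_prompt(seed: str, previous: list[str], n_taboo: int = 3) -> str:
    # Taboo-words incentive: forbid the most frequent words seen in
    # previously generated paraphrases (this selection rule is an
    # assumption; the paper's exact rule may differ).
    words = re.findall(r"[a-z']+", " ".join(previous).lower())
    taboo = [w for w, _ in Counter(words).most_common(n_taboo)]
    return (f'Paraphrase the following text: "{seed}"\n'
            f'Do not use any of these words: {", ".join(taboo)}.')

def hint_prompt(seed: str, outlier: str) -> str:
    # Hints incentive: show a previous outlier (lexically distant)
    # paraphrase as inspiration while still paraphrasing the seed.
    return (f'Paraphrase the following text: "{seed}"\n'
            f'For inspiration, here is one unusual previous paraphrase: "{outlier}"')

def chain_prompt(outlier: str) -> str:
    # Chaining incentive: the previous outlier itself becomes the text
    # to paraphrase, instead of the original seed.
    return f'Paraphrase the following text: "{outlier}"'

def distinct_n(texts: list[str], n: int = 1) -> float:
    # Distinct-n: unique n-grams divided by total n-grams across all
    # texts, one common lexical-diversity proxy (the paper may measure
    # diversity differently).
    tokens = [t for text in texts for t in re.findall(r"[a-z']+", text.lower())]
    ngrams = list(zip(*(tokens[i:] for i in range(n))))
    return len(set(ngrams)) / max(len(ngrams), 1)

if __name__ == "__main__":
    seed = "The flight was delayed by two hours."
    prev = ["My flight got delayed for two hours.",
            "The plane departed two hours late."]
    print(taboo_prompt(seed, prev))
    print(distinct_n(prev, n=1))

In this sketch, each incentive only changes how the prompt is built; the same downstream pipeline (paraphrase collection, then fine-tuning) can be reused across all three.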
@inproceedings{BUT193293,
author="ČEGIŇ, J. and PECHER, B. and ŠIMKO, J. and SRBA, I. and BIELIKOVÁ, M.",
title="Effects of diversity incentives on sample diversity and downstream model performance in LLM-based text augmentation",
booktitle="Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
year="2024",
pages="13148--13171",
publisher="Association for Computational Linguistics",
address="Bangkok, Thailand",
doi="10.18653/v1/2024.acl-long.710",
isbn="979-8-89176-094-3",
url="https://aclanthology.org/2024.acl-long.710/"
}