Publication Details
Disinformation Capabilities of Large Language Models
VYKOPAL, I.
PIKULIAK, M.
SRBA, I.
MÓRO, R.
MACKO, D.
Bieliková Mária, prof. Ing., Ph.D. (DCGM)
large language models, disinformation generation, human evaluation, fake news detection
Automated disinformation generation is often listed as one of the risks of large
language models (LLMs). The theoretical ability to flood the information space
with disinformation content might have dramatic consequences for democratic
societies around the world. This paper presents a comprehensive study of the
disinformation capabilities of the current generation of LLMs to generate false
news articles in the English language. In our study, we evaluated the capabilities of
10 LLMs using 20 disinformation narratives. We evaluated several aspects of the
LLMs: how good they are at generating news articles, how strongly they tend to
agree or disagree with the disinformation narratives, how often they generate
safety warnings, etc. We also evaluated the abilities of detection models to
detect these articles as LLM-generated. We conclude that LLMs are able to
generate convincing news articles that agree with dangerous disinformation
narratives.
@inproceedings{BUT193294,
author="VYKOPAL, I. and PIKULIAK, M. and SRBA, I. and MÓRO, R. and MACKO, D. and BIELIKOVÁ, M.",
title="Disinformation Capabilities of Large Language Models",
booktitle="Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
year="2024",
pages="14830--14847",
publisher="Association for Computational Linguistics",
address="Bangkok",
doi="10.18653/v1/2024.acl-long.793",
isbn="979-8-8917-6094-3",
url="https://aclanthology.org/2024.acl-long.793"
}