Publication Details
Deepfakes a lidé: dokážeme ještě rozlišit pravou řeč od umělé? (Deepfakes and people: can we still distinguish real speech from synthetic?)
deepfake, synthetic voice, deepfake attacks, human factor
Deepfakes are, unfortunately, slowly but surely making headlines, largely because
of their negative use in successful attacks. These attacks (e.g., financial scams
in which the attacker impersonates someone else) exploit our limited ability to
distinguish a deepfake voice from a real one. To better protect against such
attacks, it is first important to understand how well humans can distinguish a
deepfake recording from a genuine one and which factors influence this ability.
Although some international research has already addressed this topic, it
neglects to take into account how a real attack unfolds. Our research addresses
this shortcoming by realistically simulating a situation in which an attack could
take place. We thus differ significantly from existing studies, although our
findings about human performance are, unfortunately, not very favorable. This
paper presents recently published attacks using deepfakes as well as the findings
of our research, which focuses on the human ability to recognize deepfake
recordings, on how this ability can be improved in the future, and on how attacks
using voice deepfakes can be prevented.
@article{BUT186867,
author="Kamil {Malinka} and Anton {Firc} and Petr {Hanáček}",
title="Deepfakes a lidé: dokážeme ještě rozlišit pravou řeč od umělé?",
journal="DSM Data Security Management",
year="2023",
volume="2023",
number="04",
pages="22--26",
issn="1211-8737",
url="https://dsm.tate.cz/cs/2023/dsm-4-2023/deepfakes-a-lide-dokazeme-jeste-rozlisit-pravou-rec-od-umele"
}