Deepfake Warnings for Political Videos Increase Disbelief but Do Not Improve Discernment: Evidence from Two Experiments

John Ternovski, Joshua Kalla, Peter Michael Aronow
2021 unpublished
Recent advances in machine learning have led to the development of the "deepfake," a convincingly realistic, computer-generated video of a public figure saying something they have not actually said. Policymakers have expressed concern that deepfakes could mislead voters and affect election outcomes, but existing research has found minimal persuasive effects. In this paper, we explore a downstream consequence of deepfakes: if voters are repeatedly warned of the existence and dangers of deepfakes, they may simply begin to distrust all political video footage – whether real or fake. Through two online survey experiments, we found that voters were unable to discriminate between a real video and a deepfake. Statements warning about the existence of deepfakes did not enhance participants' ability to spot manipulated video content. Instead, these warnings consistently induced participants to believe that the videos they watched were fake, even when the videos were real. The warnings were not specific to the video participants were watching; simply stating that deepfakes exist increased distrust of any accompanying video. Our findings suggest that even if deepfakes are not themselves persuasive, rhetoric about deepfakes can nevertheless be weaponized by politicians and campaigns to dismiss and disown real videos.
doi:10.31219/osf.io/dta97