Multimodal Deception Detection Using Automatically Extracted Acoustic, Visual, and Lexical Features
Interspeech 2020
Deception detection in conversational dialogue has attracted much attention in recent years. Yet existing methods rely heavily on human-labeled annotations, which are costly and potentially inaccurate. In this work, we present an automated system that uses multimodal features for conversational deception detection, without the use of human annotations. We study the predictive power of different modalities and combine them for better performance. We use openSMILE to extract acoustic …
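As a rough illustration of the pipeline the abstract describes, the sketch below extracts a fixed-length acoustic feature vector with openSMILE's Python wrapper and fuses it with visual and lexical vectors by simple concatenation. The eGeMAPS feature set, the file name, and the placeholder visual/lexical vectors are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch: acoustic feature extraction with the opensmile Python
# package, then early fusion by concatenating per-modality vectors.
import numpy as np
import opensmile

# Functionals give one fixed-length vector per input file.
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,   # assumed feature set
    feature_level=opensmile.FeatureLevel.Functionals,
)

# 88-dimensional eGeMAPS functionals for one utterance (hypothetical file).
acoustic = smile.process_file("utterance.wav").to_numpy().squeeze()

# Hypothetical visual and lexical vectors from other extractors,
# e.g. facial action unit statistics and averaged word embeddings.
visual = np.zeros(128)
lexical = np.zeros(300)

# Early fusion: one concatenated vector to feed a downstream classifier.
fused = np.concatenate([acoustic, visual, lexical])
print(fused.shape)
```

Concatenation is only one way to combine modalities; the paper's actual fusion strategy and feature sets may differ.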
doi:10.21437/interspeech.2020-2320
dblp:conf/interspeech/ZhangLH20
fatcat:qvalb6wfcbfuhoszow4mhhsst4