(2018). A multi-view learning approach to deception detection. Retrieved from https://hdl.handle.net/10446/260636
A multi-view learning approach to deception detection
Beyan, Cigdem
2018-01-01
Abstract
Recently, automatic deception detection has gained momentum thanks to advances in computer vision, computational linguistics, and machine learning. Most work in this area has focused on written deception and the analysis of verbal features. However, according to psychology, people display various nonverbal behavioral cues, in addition to verbal ones, while lying. It is therefore important to exploit additional modalities, such as video and audio, to detect deception accurately. When multi-modal data was used for deception detection, previous studies concatenated all verbal and nonverbal features into a single vector. This concatenation might not be meaningful, because different feature groups can have different statistical properties, leading to lower classification accuracy. Following this intuition, we apply, for the first time in deception detection, a multi-view learning (MVL) approach in which each view corresponds to a feature group. This yields improved classification results over state-of-the-art methods. Additionally, we show that the optimized parameters of the MVL algorithm give insights into the contribution of each feature group to the final result, thus revealing the importance of each feature group and eliminating the need for a separate feature selection step. Finally, we analyze low-level (i.e., not hand-crafted) face-based features extracted with various pre-trained Deep Neural Networks (DNNs), showing that the face is the most important nonverbal cue for deception detection.

File | File size | Format
---|---|---
IC13_A Multi-View Learning Approach To Deception Detection.pdf | 226.25 kB | Adobe PDF

Version: publisher's version
License: Aisberg default license
Access: archive managers only
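The abstract's core idea — combining per-view scores with learned weights instead of concatenating all features, so that the weights themselves reveal each feature group's importance — can be illustrated with a minimal sketch. This is not the paper's MVL algorithm: the data, the centroid-based per-view scorer, and the accuracy-proportional weighting below are all simplified stand-ins chosen for illustration.

```python
# Minimal sketch of multi-view combination (NOT the paper's exact MVL method):
# each feature group ("view") gets its own simple scorer, and the combination
# weights -- here set proportional to each view's accuracy -- indicate how much
# each view contributes, unlike plain feature concatenation.
import random

random.seed(0)

def make_sample(dim, informative, label):
    # Informative views shift the feature mean with the label; noisy views do not.
    shift = float(label) if informative else 0.0
    return [random.gauss(shift, 1.0) for _ in range(dim)]

# Toy dataset: view 0 (hypothetical "face" features) is informative,
# view 1 (hypothetical uninformative features) is pure noise.
labels = [i % 2 for i in range(200)]  # 0 = truthful, 1 = deceptive
views = [
    [make_sample(5, True, y) for y in labels],
    [make_sample(5, False, y) for y in labels],
]

def centroid_scorer(X, y):
    # Per-view scorer: nearest class centroid, a stand-in for a real classifier.
    def mean(rows):
        return [sum(col) / len(rows) for col in zip(*rows)]
    c0 = mean([x for x, yy in zip(X, y) if yy == 0])
    c1 = mean([x for x, yy in zip(X, y) if yy == 1])
    def predict(x):
        d0 = sum((a - b) ** 2 for a, b in zip(x, c0))
        d1 = sum((a - b) ** 2 for a, b in zip(x, c1))
        return 1 if d1 < d0 else 0
    return predict

scorers = [centroid_scorer(V, labels) for V in views]
accs = [sum(s(x) == y for x, y in zip(V, labels)) / len(labels)
        for s, V in zip(scorers, views)]
weights = [a / sum(accs) for a in accs]  # normalized: larger = more useful view
print(weights)  # the informative ("face") view receives the larger weight
```

The key point mirrored from the abstract: inspecting `weights` ranks the feature groups directly, so no separate feature selection step is needed to see which view matters most.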
Aisberg ©2008 Library Services (Servizi bibliotecari), Università degli studi di Bergamo | Terms of use