Analysis of Face-Touching Behavior in Large Scale Social Interaction Dataset

Beyan, Cigdem; Bustreo, Matteo; Shahid, Muhammad; Bailo, Gian Luca; Carissimi, Nicolò; Del Bue, Alessio
2020-01-01

Abstract

We present the first publicly available annotations for the analysis of face-touching behavior. The annotations cover a dataset of audio-visual recordings of small-group social interactions comprising 64 videos, each lasting between 12 and 30 minutes and showing a single person participating in a four-person meeting. The annotations were produced by 16 annotators in total, with almost perfect agreement on average (Cohen's Kappa = 0.89). In total, 74K video frames were labelled as face-touch and 2M as no-face-touch. Given the dataset and the collected annotations, we also present an extensive evaluation of several face-touching detection methods: a rule-based approach, supervised learning with hand-crafted features, and feature learning and inference with a Convolutional Neural Network (CNN). Our evaluation indicates that the CNN performed best, reaching an 83.76% F1-score and a 0.84 Matthews Correlation Coefficient. To foster future research on this problem, the code and dataset have been made publicly available (github.com/IIT-PAVIS/Face-Touching-Behavior), providing all video frames, face-touch annotations, body pose estimations including face and hand key-point detections, face bounding boxes, the implemented baseline methods, and the cross-validation splits used for training and evaluating our models.
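
As a minimal illustration, and not taken from the paper or its repository, the frame-level metrics reported above (F1-score, Matthews Correlation Coefficient, and Cohen's Kappa for inter-annotator agreement) can be computed with scikit-learn as sketched below; the label arrays are illustrative placeholders, not the actual data.

import numpy as np
from sklearn.metrics import f1_score, matthews_corrcoef, cohen_kappa_score

# Per-frame labels: 1 = face-touch, 0 = no-face-touch (illustrative values only).
y_true = np.array([0, 0, 1, 1, 1, 0, 0, 1])  # ground-truth annotations
y_pred = np.array([0, 1, 1, 1, 0, 0, 0, 1])  # detector output (e.g., a CNN)

print("F1-score:", f1_score(y_true, y_pred))      # the paper reports 83.76% for its CNN
print("MCC:", matthews_corrcoef(y_true, y_pred))  # the paper reports 0.84 for its CNN

# Inter-annotator agreement between two annotators labelling the same frames.
annotator_a = np.array([0, 0, 1, 1, 1, 0, 0, 1])
annotator_b = np.array([0, 0, 1, 1, 0, 0, 0, 1])
print("Cohen's Kappa:", cohen_kappa_score(annotator_a, annotator_b))  # average 0.89 reported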
Attached file(s):

File: IC19_Analysis+of+Face-Touching+Behavior (1).pdf
Access: archive administrators only
Version: publisher's version
License: Aisberg default license
File size: 617.12 kB
Format: Adobe PDF


Use this identifier to cite or link to this document: https://hdl.handle.net/10446/260630
Citations
  • Scopus: 9
  • Web of Science (ISI): not available