
RealVAD: A Real-World Dataset and A Method for Voice Activity Detection by Body Motion Analysis

Beyan, Cigdem; Shahid, Muhammad; Murino, Vittorio
2021-01-01

Abstract

We present an automatic voice activity detection (VAD) method based solely on visual cues. Unlike traditional approaches that process audio, we show that analyzing upper body motion is beneficial for the VAD task. The proposed method consists of components for body motion representation, feature extraction with a Convolutional Neural Network (CNN) architecture, and unsupervised domain adaptation. The body motion representations, encoded as images, are fed to the feature extraction component, which is generic and person-invariant and can therefore be applied to subjects never seen before. The final component handles the domain-shift problem, which arises because the way people move and gesticulate while speaking varies from subject to subject, resulting in disparate body motion features and, consequently, poorer VAD performance. Experimental analyses on a publicly available real-world VAD dataset show that the proposed method outperforms state-of-the-art video-only and multimodal VAD approaches. Moreover, the proposed method generalizes better, as its VAD results are more consistent across different subjects. As another major contribution, we present a new multimodal dataset (called RealVAD), created from a real-world (non-role-played) panel discussion. This dataset contains many realistic situations and challenges that are missing from previous VAD datasets. We benchmark the RealVAD dataset with the proposed method as well as with cross-dataset analyses. In particular, the results of the cross-dataset experiments highlight the notable positive contribution of the applied unsupervised domain adaptation.
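To make the pipeline described in the abstract concrete, below is a minimal, hypothetical sketch of one training step of a visual-only VAD model: upper-body motion rendered as images is passed through a small CNN feature extractor, a classifier predicts speaking versus not speaking, and an unsupervised penalty aligns the features of labelled training subjects with those of an unseen target subject. The network topology, the MMD-based alignment term, and all names (MotionCNN, mmd_rbf) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation) of a visual-only VAD step:
# (1) motion images -> (2) CNN features -> classifier, with (3) an unsupervised
# domain-adaptation penalty (here an RBF-kernel MMD term, used purely as an
# illustrative stand-in for the paper's adaptation component).
import torch
import torch.nn as nn

class MotionCNN(nn.Module):
    """Small CNN over body-motion images; this exact topology is an assumption."""
    def __init__(self, in_channels=3, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.classifier = nn.Linear(feat_dim, 2)  # speaking / not speaking

    def forward(self, x):
        feats = self.backbone(x)
        return feats, self.classifier(feats)

def mmd_rbf(x, y, sigma=1.0):
    """Biased RBF-kernel MMD estimate between two feature batches (illustrative)."""
    def k(a, b):
        d = torch.cdist(a, b) ** 2
        return torch.exp(-d / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# One hypothetical training step: labelled source subjects + unlabelled target subject.
model = MotionCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()

src_imgs = torch.randn(8, 3, 224, 224)   # motion images of training subjects
src_lbls = torch.randint(0, 2, (8,))     # VAD ground truth (speaking or not)
tgt_imgs = torch.randn(8, 3, 224, 224)   # motion images of an unseen subject (no labels)

opt.zero_grad()
src_feat, src_logits = model(src_imgs)
tgt_feat, _ = model(tgt_imgs)
loss = ce(src_logits, src_lbls) + 0.1 * mmd_rbf(src_feat, tgt_feat)
loss.backward()
opt.step()
```

The 0.1 weight on the alignment term is arbitrary here; in practice it would be tuned, and other unsupervised alignment criteria could be substituted without changing the overall three-component structure.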
Beyan, C., Shahid, M., & Murino, V. (2021). RealVAD: A Real-World Dataset and A Method for Voice Activity Detection by Body Motion Analysis [journal article]. IEEE Transactions on Multimedia. Retrieved from https://hdl.handle.net/10446/260534
Attached file:
IJ16_RealVAD A Real-world Dataset for Voice Activity Detection.pdf (publisher's version, Adobe PDF, 2.72 MB; access restricted to archive administrators, Aisberg default license)

Use this identifier to cite or link to this item: https://hdl.handle.net/10446/260534
Citations
  • Scopus: 9
  • Web of Science: 7