Arousal recognition using audio-visual features and FMRI-based brain response

Junwei Han, Xiang Ji, Xintao Hu, Lei Guo, Tianming Liu

Research output: Contribution to journal › Article › peer-review

34 Scopus citations

Abstract

As an indicator of emotion intensity, arousal is a significant cue for helping users find content of interest, so effective techniques for video arousal recognition are in high demand. In this paper, we propose a novel framework for recognizing arousal levels by integrating low-level audio-visual features derived from video content with the human brain's functional activity in response to videos, measured by functional magnetic resonance imaging (fMRI). First, a set of audio-visual features that have been shown to correlate with video arousal is extracted. Then, fMRI-derived features that capture the brain activity involved in comprehending videos are extracted from a number of brain regions of interest (ROIs) identified by a universal brain reference system. Finally, the two feature sets are integrated into a joint representation learned by a multimodal deep Boltzmann machine (DBM), and this joint representation serves as the feature for training classifiers. Because fMRI scanning is expensive and time-consuming, our DBM fusion model is designed to predict the joint representation of videos for which no fMRI scans are available. Experimental results on a video benchmark demonstrate the effectiveness of our framework and the superiority of the integrated features.
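To make the fusion idea concrete, below is a minimal sketch of a two-pathway Boltzmann-machine fusion in Python/NumPy. It is not the authors' implementation: the paper trains a multimodal DBM (typically with mean-field variational inference), whereas this sketch uses greedy layer-wise RBM pretraining with one-step contrastive divergence as a simplification. All data, dimensions, and names (X_av, X_fmri, joint_rep, layer sizes) are hypothetical stand-ins. What it does show is the key capability described in the abstract: once the joint layer is trained on paired data, the fMRI pathway can be inferred by clamped Gibbs sampling so that a joint representation is still available for videos without fMRI scans.

```
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli-Bernoulli RBM trained with 1-step contrastive divergence (CD-1)."""
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase, one Gibbs step, then parameter update.
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

# Toy stand-in data: rows = video clips, binarized feature vectors
# (hypothetical sizes; the paper's audio-visual and fMRI-ROI features differ).
n_clips, d_av, d_fmri = 200, 64, 128
X_av = (rng.random((n_clips, d_av)) < 0.3).astype(float)      # audio-visual features
X_fmri = (rng.random((n_clips, d_fmri)) < 0.3).astype(float)  # fMRI ROI responses

# One pathway per modality, then a joint layer over their concatenated hiddens.
rbm_av, rbm_fmri = RBM(d_av, 32), RBM(d_fmri, 32)
rbm_joint = RBM(64, 48)

for _ in range(50):
    rbm_av.cd1_step(X_av)
    rbm_fmri.cd1_step(X_fmri)
h_pair = np.hstack([rbm_av.hidden_probs(X_av), rbm_fmri.hidden_probs(X_fmri)])
for _ in range(50):
    rbm_joint.cd1_step((rng.random(h_pair.shape) < h_pair).astype(float))

def joint_rep(av, fmri_hidden=None, gibbs_steps=10):
    """Joint representation; with fMRI missing, infer its pathway by clamped Gibbs."""
    h_av = rbm_av.hidden_probs(av)
    if fmri_hidden is None:
        fmri_hidden = np.full((len(av), 32), 0.5)  # uninformative initialization
        for _ in range(gibbs_steps):
            h_j = rbm_joint.hidden_probs(np.hstack([h_av, fmri_hidden]))
            fmri_hidden = rbm_joint.visible_probs(h_j)[:, 32:]  # AV half stays clamped
    return rbm_joint.hidden_probs(np.hstack([h_av, fmri_hidden]))

# Test time: videos without fMRI scans still get a joint feature vector,
# which would then feed an ordinary arousal classifier (e.g., an SVM).
features = joint_rep(X_av)
```

Clamping the audio-visual pathway while alternately sampling the missing fMRI pathway is the standard way a trained multimodal Boltzmann machine fills in an absent modality; it is what lets the learned representation be used on new videos even though fMRI acquisition is expensive.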

Original language: English
Article number: 7056522
Pages (from-to): 337-347
Number of pages: 11
Journal: IEEE Transactions on Affective Computing
Volume: 6
Issue number: 4
DOIs
State: Published - 1 Oct 2015

Keywords

  • Affective computing
  • Arousal recognition
  • FMRI-derived features
  • Multimodal DBM
