Arousal recognition using audio-visual features and FMRI-based brain response

Junwei Han, Xiang Ji, Xintao Hu, Lei Guo, Tianming Liu

Research output: Contribution to journal › Article › peer-review

34 Citations (Scopus)

Abstract

As an indicator of emotion intensity, arousal is a significant cue for users seeking content of interest; effective techniques for video arousal recognition are therefore highly desirable. In this paper, we propose a novel framework for recognizing arousal levels by integrating low-level audio-visual features derived from video content with the human brain's functional activity in response to videos, measured by functional magnetic resonance imaging (fMRI). First, a set of audio-visual features that have been demonstrated to correlate with video arousal is extracted. Then, fMRI-derived features that convey the brain activity involved in comprehending videos are extracted from a number of brain regions of interest (ROIs) identified by a universal brain reference system. Finally, these two feature sets are integrated to learn a joint representation using a multimodal deep Boltzmann machine (DBM). The learned joint representation can be used as the feature for training classifiers. Because fMRI scanning is expensive and time-consuming, our DBM fusion model is designed to predict the joint representation of videos without fMRI scans. Experimental results on a video benchmark demonstrate the effectiveness of our framework and the superiority of the integrated features.
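The fusion step described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: it approximates the multimodal DBM with a single joint-layer RBM trained by one-step contrastive divergence (CD-1) over the concatenated modality features, and all feature dimensions, sample counts, and hyperparameters are assumptions, with random arrays standing in for real audio-visual and fMRI features.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical stand-ins for the paper's features: 64-dim audio-visual
# descriptors and 128-dim fMRI ROI summaries per video clip.
n_clips = 200
av = rng.random((n_clips, 64))
fmri = rng.random((n_clips, 128))
v = np.hstack([av, fmri])  # concatenated visible layer

# One RBM with binary hidden units, trained with CD-1 — a simplification
# of the paper's multimodal deep Boltzmann machine.
n_vis, n_hid = v.shape[1], 32
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v = np.zeros(n_vis)
b_h = np.zeros(n_hid)
lr = 0.05

for epoch in range(20):
    h_prob = sigmoid(v @ W + b_h)                 # positive phase
    h_samp = (rng.random(h_prob.shape) < h_prob).astype(float)
    v_recon = sigmoid(h_samp @ W.T + b_v)         # one Gibbs step
    h_recon = sigmoid(v_recon @ W + b_h)
    # Contrastive-divergence parameter updates
    W += lr * (v.T @ h_prob - v_recon.T @ h_recon) / n_clips
    b_v += lr * (v - v_recon).mean(axis=0)
    b_h += lr * (h_prob - h_recon).mean(axis=0)

# Hidden activations serve as the learned joint representation,
# which would then feed an arousal-level classifier.
joint = sigmoid(v @ W + b_h)
print(joint.shape)  # → (200, 32)
```

In the full method, the joint layer sits above modality-specific DBM pathways, which also lets the model infer the joint representation for new videos when the fMRI modality is missing.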

Original language: English
Article number: 7056522
Pages (from-to): 337-347
Number of pages: 11
Journal: IEEE Transactions on Affective Computing
Volume: 6
Issue number: 4
DOI
Publication status: Published - 1 Oct 2015
