Encoding brain network response to free viewing of videos

Junwei Han, Shijie Zhao, Xintao Hu, Lei Guo, Tianming Liu

Research output: Contribution to journal › Article › peer-review


Abstract

A challenging goal for cognitive neuroscience researchers is to determine how mental representations are mapped onto patterns of neural activity. To address this problem, functional magnetic resonance imaging (fMRI) researchers have developed a large number of encoding and decoding methods. However, previous studies typically used rather limited stimulus representations, such as semantic labels and Gabor wavelet filters, and largely focused on voxel-based brain patterns. Here, we present a new fMRI encoding model that aims to overcome these limitations by predicting the human brain's responses to free viewing of video clips. In this model, we represent the stimuli using a variety of representative visual features from the computer vision community, which describe the global color distribution, local shape and spatial layout, and motion information contained in the videos, and we apply functional connectivity to model the brain's activity pattern evoked by these video clips. Our experimental results demonstrate that brain network responses during free viewing of videos can be robustly and accurately predicted across subjects using visual features. Our study suggests the feasibility of supporting cognitive neuroscience studies with computational image/video analysis and introduces the concept of using brain encoding as a test bed for evaluating visual feature extraction.
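The abstract describes an encoding workflow: represent each video clip as a vector of visual features (color, shape/spatial, motion) and learn a regression from those features to the functional-connectivity responses measured while subjects watch the clip. The sketch below is not the authors' implementation; it is a minimal, hypothetical illustration of that workflow using synthetic data in place of real video features and fMRI-derived connectivity, with ridge regression as an assumed choice of encoding model and Pearson correlation on held-out clips as the evaluation metric.

```python
"""Minimal sketch of a visual-feature -> functional-connectivity encoding model.

Synthetic data stands in for real video features and fMRI-derived connectivity;
the feature dimensions, model (ridge regression), and evaluation are assumptions
made for illustration, not the paper's exact pipeline.
"""
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Hypothetical shapes:
#   X: one row per video clip; columns are concatenated visual features
#      (e.g., color histogram + shape/spatial descriptor + motion energy).
#   Y: one row per clip; columns are functional-connectivity strengths between
#      pairs of brain regions recorded while subjects viewed that clip.
n_clips, n_visual_features, n_connections = 60, 128, 300
X = rng.standard_normal((n_clips, n_visual_features))
W_true = 0.2 * rng.standard_normal((n_visual_features, n_connections))
Y = X @ W_true + 0.5 * rng.standard_normal((n_clips, n_connections))


def pearson_per_connection(y_true, y_pred):
    """Pearson correlation between observed and predicted responses, per connection."""
    yt = y_true - y_true.mean(axis=0)
    yp = y_pred - y_pred.mean(axis=0)
    num = (yt * yp).sum(axis=0)
    den = np.sqrt((yt ** 2).sum(axis=0) * (yp ** 2).sum(axis=0)) + 1e-12
    return num / den


# Encoding model: ridge regression from visual features to connectivity,
# evaluated on held-out clips with 5-fold cross-validation.
scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = Ridge(alpha=10.0).fit(X[train_idx], Y[train_idx])
    pred = model.predict(X[test_idx])
    scores.append(pearson_per_connection(Y[test_idx], pred).mean())

print(f"mean held-out prediction correlation: {np.mean(scores):.3f}")
```

In practice the feature vectors would come from actual video descriptors (e.g., color histograms, local shape descriptors, motion features) and the targets from functional connectivity estimated from fMRI, but the fit-predict-correlate structure of the encoding evaluation is the same.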

Original language: English
Pages (from-to): 389-397
Number of pages: 9
Journal: Cognitive Neurodynamics
Volume: 8
Issue number: 5
DOIs
State: Published - Oct 2014

Keywords

  • Brain networks
  • Computer vision
  • Encoding
  • fMRI
