Sparsity-constrained fMRI decoding of visual saliency in naturalistic video streams

Xintao Hu, Cheng Lv, Gong Cheng, Jinglei Lv, Lei Guo, Junwei Han, Tianming Liu

Research output: Contribution to journal › Article › peer-review

20 Scopus citations

Abstract

Naturalistic stimuli such as video watching have been increasingly used in functional magnetic resonance imaging (fMRI)-based brain encoding and decoding studies because they provide the realistic, dynamic information that the human brain must process in everyday life. In this paper, we propose a sparsity-constrained decoding model to explore whether bottom-up visual saliency in continuous video streams can be effectively decoded from brain activity recorded by fMRI, and to examine whether sparsity constraints can improve visual saliency decoding. Specifically, we use a biologically plausible computational model to quantify the visual saliency in video streams, and adopt a sparse representation algorithm to learn a dictionary of atomic signals that is representative of the patterns of whole-brain fMRI signals. Sparse representation also links the learned atomic dictionary to the quantified video saliency. Experimental results show that the temporal visual saliency in video streams can be decoded well and that sparsity constraints improve the performance of fMRI decoding models.
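
To make the pipeline concrete, below is a minimal sketch of the two steps the abstract describes, using synthetic stand-ins for the fMRI recordings and the frame-wise saliency time course. The library choices (scikit-learn's MiniBatchDictionaryLearning and Lasso) and all names and sizes (n_atoms, n_voxels, the alpha values) are illustrative assumptions, not the authors' implementation.

    # A minimal sketch of the decoding pipeline described above, assuming
    # synthetic stand-ins for the fMRI data and the frame-wise saliency
    # time course. Library choices, names, and sizes are illustrative
    # assumptions, not the authors' code.
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n_timepoints, n_voxels, n_atoms = 240, 5000, 64

    # Whole-brain fMRI signals: rows are time points, columns are voxels.
    X = rng.standard_normal((n_timepoints, n_voxels))

    # Bottom-up saliency quantified per video frame and resampled to the
    # fMRI sampling rate (synthetic here; a biologically plausible model
    # such as Itti-Koch would supply this in practice).
    saliency = rng.random(n_timepoints)

    # Step 1: learn an atomic signal dictionary. Each voxel's time series
    # is one training sample, so the learned atoms are time courses that
    # are representative of whole-brain fMRI signal patterns.
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                       random_state=0)
    dico.fit(X.T)                           # samples: voxels, features: time
    atom_timecourses = dico.components_.T   # shape (n_timepoints, n_atoms)

    # Step 2: link the atoms to saliency with an L1-penalized regression,
    # so only a sparse subset of atoms carries the decoding weight.
    decoder = Lasso(alpha=0.01)
    decoder.fit(atom_timecourses, saliency)
    predicted = decoder.predict(atom_timecourses)

    # Evaluate decoding as the temporal correlation between the predicted
    # and the quantified saliency (a train/test split would be used on
    # real data).
    r = np.corrcoef(predicted, saliency)[0, 1]
    print(f"temporal correlation: {r:.3f}")

Note that the paper casts dictionary learning and the link to saliency as a joint sparse representation problem; the two-step version above is only the simplest runnable approximation of that idea.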

Original language: English
Article number: 7056490
Pages (from-to): 65-75
Number of pages: 11
Journal: IEEE Transactions on Autonomous Mental Development
Volume: 7
Issue number: 2
DOIs
State: Published - 1 Jun 2015

Keywords

  • Functional magnetic resonance imaging (fMRI) decoding
  • naturalistic stimuli
  • sparsity constraints
  • visual saliency
