Sparsity-constrained fMRI decoding of visual saliency in naturalistic video streams

Xintao Hu, Cheng Lv, Gong Cheng, Jinglei Lv, Lei Guo, Junwei Han, Tianming Liu

Research output: Contribution to journal › Article › peer-review

20 citations (Scopus)

Abstract

Naturalistic stimuli such as video watching have been increasingly used in functional magnetic resonance imaging (fMRI)-based brain encoding and decoding studies, since they provide the kind of real, dynamic information that the human brain has to process in everyday life. In this paper, we propose a sparsity-constrained decoding model to explore whether bottom-up visual saliency in continuous video streams can be effectively decoded from brain activity recorded by fMRI, and to examine whether sparsity constraints can improve visual saliency decoding. Specifically, we use a biologically plausible computational model to quantify the visual saliency in video streams, and adopt a sparse representation algorithm to learn the atomic fMRI signal dictionaries that are representative of the patterns of whole-brain fMRI signals. Sparse representation also links the learned atomic dictionary with the quantified video saliency. Experimental results show that the temporal visual saliency in video streams can be decoded well, and that the sparsity constraints improve the performance of fMRI decoding models.
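The core technique named in the abstract, learning a dictionary of atomic signal patterns under a sparsity constraint, can be illustrated with a minimal sketch. This is not the authors' code: the data here are synthetic stand-ins for whole-brain fMRI signals, and scikit-learn's `DictionaryLearning` is used as a generic L1-regularized dictionary learner, not the specific algorithm of the paper.

```python
# Minimal sketch of sparsity-constrained dictionary learning on
# synthetic "fMRI-like" signals (NOT the authors' implementation).
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
# Synthetic stand-in for whole-brain fMRI signals:
# 200 voxel time series, each with 50 time points.
X = rng.standard_normal((200, 50))

# Learn 10 atomic temporal patterns; alpha controls the L1 sparsity
# penalty on the coefficients.
dl = DictionaryLearning(
    n_components=10,
    alpha=1.0,
    max_iter=20,
    transform_algorithm="lasso_lars",
    random_state=0,
)
codes = dl.fit_transform(X)   # sparse coefficients, shape (200, 10)
atoms = dl.components_        # learned dictionary atoms, shape (10, 50)

# Under the L1 penalty, most coefficients are driven exactly to zero,
# so each signal is represented by a few atoms.
print(codes.shape, atoms.shape, float(np.mean(codes == 0)))
```

In the paper's setting, the sparse codes of the learned atoms would then be linked to the saliency time course computed from the video, so that saliency can be predicted from the fMRI signal representation.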

Original language: English
Article number: 7056490
Pages (from-to): 65-75
Number of pages: 11
Journal: IEEE Transactions on Autonomous Mental Development
Volume: 7
Issue number: 2
DOI
Publication status: Published - 1 Jun 2015
