TY - GEN
T1 - Selective Visual Attention Revealed by Integrating FMRI and Eye-Tracking
AU - Qin, Yang
AU - Wang, Liting
AU - Guo, Lei
AU - Han, Junwei
AU - Hu, Xintao
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Selective visuo-spatial attention (SVSA), consisting of both bottom-up and top-down processes, prioritizes relevant visual information while filtering out the rest. Research interest in SVSA has tended to shift from isolating the two processes to depicting the interplay between them. Existing studies have highlighted that the combination of computational modeling, behavioral measures, and naturalistic-paradigm neuroimaging enables the study of SVSA in ecologically valid contexts. However, several critical issues need to be revisited. First, the computational visual saliency models that bridge external video stimuli and brain activities in previous studies were designed for static images rather than dynamic videos. Second, both the computational saliency maps and the eye-gaze heatmaps can be noisy. Third, the participant cohorts were relatively small. In this study, we investigate SVSA using the large-scale movie-watching functional magnetic resonance imaging (fMRI) data in the Human Connectome Project (HCP), integrating potential solutions to the limitations discussed above. Our experimental results highlight the importance of visual and auditory interactions in forming bottom-up visual attention, as well as the engagement of high-order visual cortices and the ventral frontoparietal network in the bottom-up modulatory effect under naturalistic conditions.
AB - Selective visuo-spatial attention (SVSA), consisting of both bottom-up and top-down processes, prioritizes relevant visual information while filtering out the rest. Research interest in SVSA has tended to shift from isolating the two processes to depicting the interplay between them. Existing studies have highlighted that the combination of computational modeling, behavioral measures, and naturalistic-paradigm neuroimaging enables the study of SVSA in ecologically valid contexts. However, several critical issues need to be revisited. First, the computational visual saliency models that bridge external video stimuli and brain activities in previous studies were designed for static images rather than dynamic videos. Second, both the computational saliency maps and the eye-gaze heatmaps can be noisy. Third, the participant cohorts were relatively small. In this study, we investigate SVSA using the large-scale movie-watching functional magnetic resonance imaging (fMRI) data in the Human Connectome Project (HCP), integrating potential solutions to the limitations discussed above. Our experimental results highlight the importance of visual and auditory interactions in forming bottom-up visual attention, as well as the engagement of high-order visual cortices and the ventral frontoparietal network in the bottom-up modulatory effect under naturalistic conditions.
KW - Computational visual saliency model
KW - Eye-tracking
KW - Functional MRI
KW - Visuo-spatial attention
UR - http://www.scopus.com/inward/record.url?scp=85172070911&partnerID=8YFLogxK
U2 - 10.1109/ISBI53787.2023.10230382
DO - 10.1109/ISBI53787.2023.10230382
M3 - Conference contribution
AN - SCOPUS:85172070911
T3 - Proceedings - International Symposium on Biomedical Imaging
BT - 2023 IEEE International Symposium on Biomedical Imaging, ISBI 2023
PB - IEEE Computer Society
T2 - 20th IEEE International Symposium on Biomedical Imaging, ISBI 2023
Y2 - 18 April 2023 through 21 April 2023
ER -