Cross-modal image fusion guided by subjective visual attention

Aiqing Fang, Xinbo Zhao, Yanning Zhang

Research output: Contribution to journal › Article › peer-review

19 Citations (Scopus)

Abstract

The human visual perception system exhibits strong robustness and contextual awareness across a variety of image processing tasks. This robustness and contextual awareness are closely related to two characteristics of human visual perception: multi-task auxiliary learning and subjective attention. To improve the robustness and contextual awareness of image fusion, we propose a multi-task auxiliary learning image fusion method guided by subjective attention. The method effectively unifies the subjective task intention with the prior knowledge of the human brain. To realize the proposed method, we first analyze the mechanism of multi-task auxiliary learning and build a multi-task auxiliary learning network. Secondly, based on the human visual attention perception mechanism, we introduce a human visual attention network guided by subjective tasks on top of the multi-task auxiliary learning network. Subjective intention is injected through the subjective attention task model, so that the network can fuse images according to that intention. Finally, to verify the superiority of our image fusion method, we conduct experiments on a combined vision system image dataset and on an infrared and visible image dataset. The experimental results demonstrate that our fusion method outperforms state-of-the-art methods in contextual awareness and robustness.
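
The abstract describes the architecture only at a high level, so the following is a minimal, hypothetical PyTorch sketch of the general idea it outlines: two modality encoders, an attention branch whose weight map steers the fusion, and an auxiliary task head trained jointly with the fusion decoder. All layer sizes, the choice of auxiliary task, and the names used here (AttentionGuidedFusion, enc_a, enc_b, attn, aux) are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch, not the authors' network: attention-weighted fusion
    # of two modalities with an auxiliary task head for multi-task learning.
    import torch
    import torch.nn as nn

    class AttentionGuidedFusion(nn.Module):
        def __init__(self, channels=32):
            super().__init__()
            # Encoders for the two modalities (e.g. infrared / visible).
            self.enc_a = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU())
            self.enc_b = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU())
            # Attention branch: predicts a per-pixel weight map that decides
            # how much each modality contributes to the fused features.
            self.attn = nn.Sequential(
                nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, 1, 1), nn.Sigmoid())
            # Fusion decoder producing the fused image.
            self.dec = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, 1, 3, padding=1))
            # Auxiliary task head (illustrative choice: a coarse saliency map)
            # trained jointly to inject task context into the shared features.
            self.aux = nn.Conv2d(channels, 1, 1)

        def forward(self, img_a, img_b):
            fa, fb = self.enc_a(img_a), self.enc_b(img_b)
            w = self.attn(torch.cat([fa, fb], dim=1))   # weight map in [0, 1]
            fused_feat = w * fa + (1.0 - w) * fb        # attention-weighted fusion
            return self.dec(fused_feat), self.aux(fused_feat)

    if __name__ == "__main__":
        net = AttentionGuidedFusion()
        ir, vis = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
        fused, aux_map = net(ir, vis)
        print(fused.shape, aux_map.shape)  # both torch.Size([1, 1, 64, 64])

In such a setup the fusion loss and the auxiliary task loss would be optimized jointly, so the attention map reflects the subjective task rather than purely low-level image statistics; this is only one plausible reading of the abstract.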

Original language: English
Pages (from-to): 333-345
Number of pages: 13
Journal: Neurocomputing
Volume: 414
DOI
Publication status: Published - 13 Nov 2020
