Revisiting Feature Fusion for RGB-T Salient Object Detection

Qiang Zhang, Tonglin Xiao, Nianchang Huang, Dingwen Zhang, Jungong Han

Research output: Contribution to journal › Article › peer-review

115 citations (Scopus)

Abstract

While many RGB-based saliency detection algorithms have recently shown the capability of segmenting salient objects from an image, they still suffer from unsatisfactory performance when dealing with complex scenarios, insufficient illumination, or occluded appearances. To overcome this problem, this article studies RGB-T saliency detection, where we take advantage of the thermal modality's robustness against illumination changes and occlusion. To achieve this goal, we revisit feature fusion for mining intrinsic RGB-T saliency patterns and propose a novel deep feature fusion network, which consists of multi-scale, multi-modality, and multi-level feature fusion modules. Specifically, the multi-scale feature fusion module captures rich contextual features from each modality, while the multi-modality and multi-level feature fusion modules integrate complementary features across the two modalities and across different levels of features, respectively. To demonstrate the effectiveness of the proposed approach, we conduct comprehensive experiments on the RGB-T saliency detection benchmark. The experimental results demonstrate that our approach outperforms other state-of-the-art methods and the conventional feature fusion modules by a large margin.
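The three fusion stages named in the abstract can be sketched in plain NumPy. This is a minimal illustration of the general ideas (multi-scale context pooling, cross-modality channel mixing, and coarse-to-fine level fusion), not the paper's actual architecture; all shapes, scales, and the random projection weight are assumptions for the sake of the example.

```python
import numpy as np

def multi_scale_fusion(feat):
    """Illustrative multi-scale fusion: average-pool a C x H x W feature
    map at several window sizes, upsample back by repetition, and average,
    so each position aggregates context from multiple receptive fields.
    Assumption: H and W are divisible by every scale used."""
    fused = feat.copy()
    for s in (2, 4):
        c, h, w = feat.shape
        pooled = feat.reshape(c, h // s, s, w // s, s).mean(axis=(2, 4))
        fused += pooled.repeat(s, axis=1).repeat(s, axis=2)
    return fused / 3.0  # original map plus two pooled scales

def multi_modality_fusion(rgb_feat, thermal_feat):
    """Illustrative cross-modality fusion: concatenate RGB and thermal
    features along channels, then mix with a 1x1-convolution-style linear
    projection back to the original channel width (random weights here,
    standing in for learned parameters)."""
    stacked = np.concatenate([rgb_feat, thermal_feat], axis=0)  # 2C x H x W
    c = rgb_feat.shape[0]
    rng = np.random.default_rng(0)
    weight = rng.standard_normal((c, 2 * c)) / np.sqrt(2 * c)
    return np.einsum("oc,chw->ohw", weight, stacked)

def multi_level_fusion(deep_feat, shallow_feat):
    """Illustrative level fusion: upsample the coarser deep-layer map by
    nearest-neighbour repetition and add the finer shallow-layer map."""
    s = shallow_feat.shape[1] // deep_feat.shape[1]
    up = deep_feat.repeat(s, axis=1).repeat(s, axis=2)
    return up + shallow_feat
```

In a real network each stage would be a learned module applied at every backbone level; here the point is only the data flow: per-modality context enrichment, then cross-modality mixing, then coarse-to-fine aggregation.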

Original language: English
Article number: 9161021
Pages (from-to): 1804-1818
Number of pages: 15
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 31
Issue number: 5
DOI
Publication status: Published - May 2021
Externally published: Yes
