Learning Selective Mutual Attention and Contrast for RGB-D Saliency Detection

Nian Liu, Ni Zhang, Ling Shao, Junwei Han

Research output: Contribution to journal › Article › peer-review

81 Scopus citations

Abstract

How to effectively fuse cross-modal information is a key problem for RGB-D salient object detection (SOD). Early fusion and result fusion schemes fuse RGB and depth information at the input and output stages, respectively, and hence incur distribution gaps or information loss. Many models instead employ a feature fusion strategy, but they are limited by their use of low-order point-to-point fusion methods. In this paper, we propose a novel mutual attention model that fuses attention and context from different modalities. We use the non-local attention of one modality to propagate long-range contextual dependencies for the other, thus leveraging complementary attention cues to achieve high-order and trilinear cross-modal interaction. We also propose to induce contrast inference from the mutual attention, yielding a unified model. Considering that low-quality depth data may be detrimental to model performance, we further propose a selective attention mechanism to reweight the added depth cues. We embed the proposed modules in a two-stream CNN for RGB-D SOD. Experimental results demonstrate the effectiveness of the proposed model. Moreover, we construct a new, challenging, and high-quality large-scale RGB-D SOD dataset, which can promote both the training and evaluation of deep models.
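The core idea described above, using the non-local attention computed in one modality to aggregate long-range context for the other, can be illustrated with a short sketch. The following PyTorch code is a hypothetical approximation rather than the authors' released implementation; the module name, the channel-reduction ratio, and the residual connections are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MutualNonLocalAttention(nn.Module):
    """Sketch of cross-modal mutual attention: the non-local affinity map
    computed from one modality aggregates context from the other modality."""

    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or channels // 2
        # 1x1 projections for query/key/value in each stream
        self.q_rgb = nn.Conv2d(channels, reduced, 1)
        self.k_rgb = nn.Conv2d(channels, reduced, 1)
        self.v_rgb = nn.Conv2d(channels, channels, 1)
        self.q_d = nn.Conv2d(channels, reduced, 1)
        self.k_d = nn.Conv2d(channels, reduced, 1)
        self.v_d = nn.Conv2d(channels, channels, 1)

    def forward(self, f_rgb, f_d):
        b, c, h, w = f_rgb.shape

        def attn(q, k):
            # pairwise affinities over all spatial positions (non-local)
            q = q.flatten(2).transpose(1, 2)            # (b, n, c')
            k = k.flatten(2)                            # (b, c', n)
            return F.softmax(torch.bmm(q, k), dim=-1)   # (b, n, n)

        a_rgb = attn(self.q_rgb(f_rgb), self.k_rgb(f_rgb))  # attention from RGB
        a_d = attn(self.q_d(f_d), self.k_d(f_d))            # attention from depth

        v_rgb = self.v_rgb(f_rgb).flatten(2).transpose(1, 2)  # (b, n, c)
        v_d = self.v_d(f_d).flatten(2).transpose(1, 2)         # (b, n, c)

        # mutual attention: each stream is contextualized by the other's
        # attention map, giving cross-modal long-range interaction
        out_rgb = torch.bmm(a_d, v_rgb).transpose(1, 2).reshape(b, c, h, w) + f_rgb
        out_d = torch.bmm(a_rgb, v_d).transpose(1, 2).reshape(b, c, h, w) + f_d
        return out_rgb, out_d
```

In this sketch, the selective reweighting of depth cues mentioned in the abstract would correspond to scaling the depth-derived attention term (e.g. the `a_d`-weighted output) by a learned confidence before the residual addition; that gating is omitted here for brevity.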

Original language: English
Pages (from-to): 9026-9042
Number of pages: 17
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 44
Issue number: 12
DOIs
State: Published - 1 Dec 2022

Keywords

  • RGB-D image
  • Salient object detection
  • attention model
  • contrast
  • non-local network
