Abstract
For complex scene images, introducing depth information can greatly improve the performance of salient object detection. However, the up-sampling and down-sampling operations in neural networks may blur the boundaries of objects in the saliency map, thereby reducing detection performance. To address this problem, a boundary-driven cross-modal and cross-layer fusion network (BC2FNet) for RGB-D salient object detection is proposed in this paper, which preserves object boundaries by adding boundary-information guidance to the cross-modal and cross-layer fusion, respectively. Firstly, a boundary generation module is designed to extract two kinds of boundary information from the low-level features of the RGB and depth modalities, respectively. Secondly, a boundary-driven feature selection module is designed, which simultaneously focuses on important feature information and preserves boundary details during the fusion of the RGB and depth modalities. Finally, a boundary-driven cross-layer fusion module is proposed, which adds both kinds of boundary information during up-sampling fusion across adjacent layers. By embedding this module into the top-down information fusion flow, the predicted saliency map can contain accurate objects with sharp boundaries. Simulation results on five standard RGB-D datasets show that the proposed model achieves better performance.
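The boundary-driven cross-modal fusion described above can be pictured as a boundary-gated weighting of RGB and depth features. The sketch below is a minimal NumPy illustration of that idea only; it is not the authors' implementation, and the gating formula, function names, and tensor shapes are all assumptions.

```python
import numpy as np

def sigmoid(x):
    """Squash values into (0, 1) to act as a soft attention gate."""
    return 1.0 / (1.0 + np.exp(-x))

def boundary_driven_fusion(f_rgb, f_depth, b_rgb, b_depth):
    """Hypothetical boundary-gated cross-modal fusion: the two
    boundary maps are combined into a single sigmoid gate that
    re-weights the RGB and depth feature maps element-wise."""
    gate = sigmoid(b_rgb + b_depth)            # combined boundary attention in (0, 1)
    return gate * f_rgb + (1.0 - gate) * f_depth

# Toy feature maps: (height, width, channels)
rng = np.random.default_rng(0)
f_rgb = rng.standard_normal((8, 8, 16))
f_depth = rng.standard_normal((8, 8, 16))
b_rgb = rng.standard_normal((8, 8, 1))         # single-channel boundary maps,
b_depth = rng.standard_normal((8, 8, 1))       # broadcast over channels

fused = boundary_driven_fusion(f_rgb, f_depth, b_rgb, b_depth)
print(fused.shape)  # (8, 8, 16)
```

Because the gate is a convex weight, each fused value lies between the corresponding RGB and depth feature values, so boundary-adjacent locations can lean on whichever modality the gate favors without amplifying either.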
| Translated title of the contribution | RGB⁃D salient object detection based on BC2FNet network |
|---|---|
| Original language | Traditional Chinese |
| Pages (from–to) | 1135-1143 |
| Number of pages | 9 |
| Journal | Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University |
| Volume | 42 |
| Issue | 6 |
| DOI | |
| Publication status | Published - December 2024 |
Keywords
- boundary-driven
- cross-layer fusion
- cross-modal fusion
- salient object detection