
DEEP FEATURE SELECTION-AND-FUSION FOR RGB-D SEMANTIC SEGMENTATION

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

22 Citations (Scopus)

Abstract

Scene depth information can complement visual information for more accurate semantic segmentation. However, how to effectively integrate multi-modality information into representative features remains an open problem. Most existing work uses DCNNs to fuse multi-modality information implicitly, but as the network deepens, some critical distinguishing features may be lost, which degrades segmentation performance. This work proposes a unified and efficient feature selection-and-fusion network (FSFNet), which contains a symmetric cross-modality residual fusion module for the explicit fusion of multi-modality information. In addition, the network includes a detailed feature propagation module that preserves low-level detail during the forward pass. Experimental evaluations demonstrate that the proposed model achieves performance competitive with state-of-the-art methods on two public datasets.
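The abstract does not give the equations of the symmetric cross-modality residual fusion module. As a rough illustration of the general idea (each modality keeps its own features and receives a gated residual from the other modality), here is a minimal NumPy sketch; the gating function, shapes, and fusion form are all illustrative assumptions, not the paper's actual design:

```python
import numpy as np

def gate(x):
    # Channel-wise sigmoid gate from global average pooling (illustrative choice).
    pooled = x.mean(axis=(1, 2), keepdims=True)  # shape (C, 1, 1)
    return 1.0 / (1.0 + np.exp(-pooled))

def symmetric_cross_modal_fusion(rgb, depth):
    """Symmetric residual fusion (assumed form): each stream adds a
    gated residual contributed by the other modality's features."""
    rgb_out = rgb + gate(rgb) * depth      # RGB stream selects from depth
    depth_out = depth + gate(depth) * rgb  # depth stream selects from RGB
    return rgb_out, depth_out

# Toy feature maps, C x H x W (sizes are arbitrary for the sketch).
rgb = np.random.rand(64, 8, 8)
depth = np.random.rand(64, 8, 8)
r2, d2 = symmetric_cross_modal_fusion(rgb, depth)
print(r2.shape, d2.shape)  # (64, 8, 8) (64, 8, 8)
```

Because both directions of the residual exchange are computed, the fused streams can be carried forward separately, which matches the abstract's emphasis on explicit rather than implicit fusion.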

Original language: English
Title of host publication: 2021 IEEE International Conference on Multimedia and Expo, ICME 2021
Publisher: IEEE Computer Society
ISBN (electronic): 9781665438643
DOI
Publication status: Published - 2021
Event: 2021 IEEE International Conference on Multimedia and Expo, ICME 2021 - Shenzhen, China
Duration: 5 Jul 2021 - 9 Jul 2021

Publication series

Name: Proceedings - IEEE International Conference on Multimedia and Expo
ISSN (print): 1945-7871
ISSN (electronic): 1945-788X

Conference

Conference: 2021 IEEE International Conference on Multimedia and Expo, ICME 2021
Country/Territory: China
City: Shenzhen
Period: 5/07/21 - 9/07/21
