AGDF-Net: Learning Domain Generalizable Depth Features With Adaptive Guidance Fusion

Lina Liu, Xibin Song, Mengmeng Wang, Yuchao Dai, Yong Liu, Liangjun Zhang

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

Cross-domain generalizable depth estimation aims to estimate the depth of target domains (i.e., real-world) using models trained on source domains (i.e., synthetic). Previous methods mainly use additional real-world datasets to extract depth-specific information for cross-domain generalizable depth estimation. Unfortunately, due to the large domain gap, adequate depth-specific information is hard to obtain and interference is difficult to remove, which limits performance. To relieve these problems, we propose a domain generalizable feature extraction network with adaptive guidance fusion (AGDF-Net) to fully acquire the features essential for depth estimation at multiple feature scales. Specifically, our AGDF-Net first separates the image into initial depth and weak-related depth components with reconstruction and contrary losses. Subsequently, an adaptive guidance fusion module is designed to sufficiently intensify the initial depth features, yielding domain-generalizable intensified depth features. Finally, taking the intensified depth features as input, an arbitrary depth estimation network can be used for real-world depth estimation. Using only synthetic datasets, our AGDF-Net can be applied to various real-world datasets (i.e., KITTI, NYUDv2, NuScenes, DrivingStereo and CityScapes) with state-of-the-art performance. Furthermore, experiments with a small amount of real-world data in a semi-supervised setting also demonstrate the superiority of AGDF-Net over state-of-the-art approaches.
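The fusion step described above can be illustrated with a highly simplified sketch. The function name, the per-channel sigmoid gating scheme, and all tensor shapes below are assumptions for illustration only; the paper's actual adaptive guidance fusion module is a learned network component, not this hand-written blend.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_guidance_fusion(initial_depth_feat, weak_related_feat, gate_weights):
    # Hypothetical sketch: a learned gate (here, fixed weights passed in)
    # decides per channel how strongly the initial depth features are
    # intensified versus guided by the weak-related component.
    gate = sigmoid(gate_weights)  # values in (0, 1)
    return gate * initial_depth_feat + (1.0 - gate) * weak_related_feat

# Toy feature maps with shape (channels, height, width).
f_depth = np.ones((4, 8, 8))      # stand-in for initial depth features
f_weak = np.zeros((4, 8, 8))      # stand-in for weak-related features
g = np.zeros((4, 1, 1))           # sigmoid(0) = 0.5 -> equal blend

out = adaptive_guidance_fusion(f_depth, f_weak, g)
print(out[0, 0, 0])  # 0.5
```

In the real model the gate would be predicted from the features themselves at each scale, so the blend adapts per image rather than being fixed as in this toy example.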

Original language: English
Article number: 10356721
Pages (from-to): 3137-3155
Number of pages: 19
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 46
Issue number: 5
DOI
Publication status: Published - 1 May 2024
