
Learning Cross-modality Interaction for Robust Depth Perception of Autonomous Driving

Yunji Liang, Nengzhen Chen, Zhiwen Yu, Lei Tang, Hongkai Yu, Bin Guo, Daniel Dajun Zeng

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

As one of the fundamental tasks of autonomous driving, depth perception aims to perceive physical objects in three dimensions and to estimate their distances from the ego vehicle. Despite great efforts, LiDAR-only and camera-only solutions suffer from low accuracy and poor robustness to noisy input. Given the integration of monocular cameras and LiDAR sensors in autonomous vehicles, in this article we introduce a two-stream architecture that learns a modality-interaction representation under the guidance of an image reconstruction task, compensating for the deficiencies of each modality in a parallel manner. Specifically, the two-stream architecture preserves multi-scale cross-modality interactions via a cascading interaction network guided by the reconstruction task. The shared representation of modality interaction is then used to infer the dense depth map, exploiting the complementarity and heterogeneity of the two modalities. We evaluated the proposed solution on the KITTI dataset and the CALAR synthetic dataset. Our experimental results show that learning the coupled interaction of modalities under the guidance of an auxiliary task leads to significant performance improvements. Furthermore, our approach is competitive with state-of-the-art models and robust to noisy input. The source code is available at https://github.com/tonyFengye/Code/tree/master.
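To make the abstract's architecture concrete, the sketch below shows one minimal way a two-stream network with multi-scale cross-modality interaction and an auxiliary image-reconstruction head might be wired up in PyTorch. It is an illustration under assumptions, not the authors' implementation (see their GitHub repository for that); all names here (TwoStreamDepthNet, InteractionBlock, the channel sizes, and the 0.1 reconstruction-loss weight) are hypothetical.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, stride=1):
    # Basic conv-BN-ReLU unit shared by both streams.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class InteractionBlock(nn.Module):
    """Exchanges information between the two streams at one scale."""
    def __init__(self, ch):
        super().__init__()
        self.fuse = conv_block(2 * ch, ch)

    def forward(self, img_feat, dep_feat):
        shared = self.fuse(torch.cat([img_feat, dep_feat], dim=1))
        # Each stream is refined by the shared cross-modal representation.
        return img_feat + shared, dep_feat + shared, shared

class TwoStreamDepthNet(nn.Module):
    def __init__(self, chs=(32, 64, 128)):
        super().__init__()
        self.img_enc, self.dep_enc, self.interact = (
            nn.ModuleList(), nn.ModuleList(), nn.ModuleList())
        in_img, in_dep = 3, 1  # RGB image / sparse LiDAR depth map
        for ch in chs:
            self.img_enc.append(conv_block(in_img, ch, stride=2))
            self.dep_enc.append(conv_block(in_dep, ch, stride=2))
            self.interact.append(InteractionBlock(ch))
            in_img = in_dep = ch
        # Three stride-2 stages downsample by 8x, so upsample by 8x here.
        self.up = nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False)
        self.depth_head = nn.Conv2d(chs[-1], 1, 3, padding=1)  # dense depth
        self.recon_head = nn.Conv2d(chs[-1], 3, 3, padding=1)  # auxiliary reconstruction

    def forward(self, rgb, sparse_depth):
        x, d = rgb, sparse_depth
        for img_layer, dep_layer, inter in zip(self.img_enc, self.dep_enc, self.interact):
            x, d = img_layer(x), dep_layer(d)
            x, d, shared = inter(x, d)  # cascading multi-scale interaction
        shared = self.up(shared)
        return self.depth_head(shared), self.recon_head(shared)

if __name__ == "__main__":
    # Usage: joint loss = depth supervision + reconstruction guidance (weight illustrative).
    net = TwoStreamDepthNet()
    rgb = torch.randn(2, 3, 64, 256)
    sparse = torch.randn(2, 1, 64, 256)
    pred_depth, recon = net(rgb, sparse)
    gt_depth = torch.randn(2, 1, 64, 256)
    loss = nn.functional.l1_loss(pred_depth, gt_depth) \
        + 0.1 * nn.functional.l1_loss(recon, rgb)
    loss.backward()
```

The key idea the sketch tries to capture is that the reconstruction head supervises the *shared* representation, so the cross-modal features must retain enough image structure to rebuild the RGB input while also predicting dense depth.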

Original language: English
Article number: 48
Journal: ACM Transactions on Intelligent Systems and Technology
Volume: 15
Issue: 3
DOI
Publication status: Published - 15 Apr 2024
