MA-Stereo: Real-Time Stereo Matching via Multi-Scale Attention Fusion and Spatial Error-Aware Refinement

Wei Gao, Yongjie Cai, Youssef Akoudad, Yang Yang, Jie Chen

Research output: Contribution to journal › Article › peer-review

Abstract

Stereo matching is a fundamental task in computer vision. Real-time stereo matching has recently shown great potential in robotics and autonomous driving applications. However, the cost aggregation used in existing real-time stereo matching methods suffers from accuracy limitations in ill-posed regions. Furthermore, most real-time stereo matching methods struggle to predict disparity for object details and edge areas, producing disparity maps that are relatively blurred and lack fine detail. To address these issues, we propose a real-time stereo matching architecture called MA-Stereo, which features a multi-scale attention fusion (MAF) module and an attention-based spatial error-aware refinement (ASER) module. The MAF adaptively fuses context and geometry information through an attention mechanism, effectively improving cost aggregation. In addition, the ASER refines the predicted disparity map, fully leveraging high-frequency information and spatial evidence to accurately predict disparities for sharp edges and thin structures. Experimental results on the SceneFlow and KITTI benchmarks demonstrate that MA-Stereo outperforms almost all current state-of-the-art real-time stereo matching methods while maintaining relatively low runtime, achieving a favorable trade-off between accuracy and speed.
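As a rough illustration of the attention-based fusion idea described in the abstract, the sketch below computes per-pixel softmax attention weights over two feature sources (context and geometry) and blends them accordingly. This is only a minimal NumPy toy, not the paper's actual MAF module: the function name `attention_fuse` and the fixed scoring vectors `w_ctx` / `w_geo` are illustrative assumptions, whereas the real module learns its attention weights end to end inside a CNN.

```python
import numpy as np

def attention_fuse(context, geometry, w_ctx, w_geo):
    """Toy attention fusion of two (C, H, W) feature maps.

    Illustrative only -- the paper's MAF module learns its weighting;
    here w_ctx / w_geo are fixed scoring vectors chosen for the demo.
    """
    # Per-pixel scalar score for each source: weighted sum over channels.
    s_ctx = (context * w_ctx[:, None, None]).sum(axis=0)   # (H, W)
    s_geo = (geometry * w_geo[:, None, None]).sum(axis=0)  # (H, W)

    # Softmax over the two sources at every spatial location.
    scores = np.stack([s_ctx, s_geo])                      # (2, H, W)
    scores -= scores.max(axis=0, keepdims=True)            # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=0, keepdims=True)          # sums to 1 per pixel

    # Attention-weighted blend of the two feature maps.
    return weights[0] * context + weights[1] * geometry

rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
ctx = rng.standard_normal((C, H, W))
geo = rng.standard_normal((C, H, W))
fused = attention_fuse(ctx, geo,
                       rng.standard_normal(C),
                       rng.standard_normal(C))
```

Because the per-pixel weights form a convex combination, each fused feature lies between the corresponding context and geometry values, so neither source can dominate everywhere; the learned version of this idea is what lets MAF adapt the blend to ill-posed regions.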

Original language: English
Pages (from-to): 9954-9961
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 9
Issue number: 11
DOIs
State: Published - 2024

Keywords

  • Autonomous vehicle navigation
  • Deep learning for visual perception
  • Real-time stereo matching
