Collaborative Multimodal Fusion Network for Multiagent Perception

Lei Zhang, Binglu Wang, Yongqiang Zhao, Yuan Yuan, Tianfei Zhou, Zhijun Li

Research output: Contribution to journal › Article › peer-review

Abstract

With the increasing popularity of autonomous driving systems and their applications in complex transportation scenarios, collaborative perception among multiple intelligent agents has become an important research direction. Existing single-agent multimodal fusion approaches are limited by their inability to leverage additional sensory data from nearby agents. In this article, we present the collaborative multimodal fusion network (CMMFNet) for distributed perception in multiagent systems. CMMFNet first extracts modality-specific features from LiDAR point clouds and camera images for each agent using dual-stream neural networks. To overcome the ambiguity in depth prediction, we introduce a collaborative depth supervision module that projects dense fused point clouds onto image planes to generate more accurate depth ground truths. We then design modality-aware fusion strategies to aggregate homogeneous features across agents while preserving their distinctive properties. To align heterogeneous LiDAR and camera features, we introduce a modality consistency learning method. Finally, a transformer-based fusion module dynamically captures cross-modal correlations to produce a unified representation. Comprehensive evaluations on two extensive multiagent perception datasets, OPV2V and V2XSet, affirm the superiority of CMMFNet in detection performance, establishing a new benchmark in the field.
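
To make the collaborative depth supervision step concrete, the sketch below illustrates one plausible reading of it: LiDAR points shared by nearby agents are fused in a common frame and projected onto an agent's image plane to form a dense depth ground-truth map. This is a minimal illustration under assumed conventions, not the authors' released code; the function names, the transform/intrinsics layout, and the z-buffering choice are all assumptions.

```python
# Hypothetical sketch of collaborative depth supervision:
# fuse multiagent LiDAR clouds, then rasterize them into a depth ground-truth map.
import numpy as np

def fuse_agent_clouds(clouds_world):
    """Concatenate point clouds already expressed in a shared world frame -> (N, 3)."""
    return np.concatenate(clouds_world, axis=0)

def project_to_depth_map(points_world, T_cam_from_world, K, hw):
    """Project 3-D points through a pinhole camera and keep the nearest depth per pixel."""
    h, w = hw
    # Homogeneous transform into the camera frame.
    pts_h = np.hstack([points_world, np.ones((len(points_world), 1))])
    pts_cam = (T_cam_from_world @ pts_h.T).T[:, :3]
    # Discard points behind (or almost on) the image plane.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]
    # Pinhole projection with intrinsics K.
    uv = (K @ pts_cam.T).T
    u = (uv[:, 0] / uv[:, 2]).astype(int)
    v = (uv[:, 1] / uv[:, 2]).astype(int)
    z = pts_cam[:, 2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # Z-buffer: keep the closest point that lands in each pixel.
    depth = np.full((h, w), np.inf)
    np.minimum.at(depth, (v[valid], u[valid]), z[valid])
    depth[np.isinf(depth)] = 0.0  # 0 marks pixels with no LiDAR supervision
    return depth
```

In this reading, the fused (and therefore denser) cloud yields more pixels with valid depth than any single agent's LiDAR alone, which is what would make the projected map a stronger supervision signal for the camera branch's depth prediction.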

Original language: English
Journal: IEEE Transactions on Cybernetics
DOIs
State: Accepted/In press - 2024

Keywords

  • 3-D object detection
  • autonomous driving
  • collaborative perception
  • multiagent system
  • multimodal fusion
