TY - GEN
T1 - V2VFusion
T2 - 2023 China Automation Congress, CAC 2023
AU - Zhang, Lei
AU - Wang, Binglu
AU - Wang, Zhaozhong
AU - Zhao, Yongqiang
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Current vehicle-to-vehicle (V2V) research mainly centers on either LiDAR or camera-based perception. Yet, combining data from multiple sensors offers a more complete and precise understanding of the environment. This paper presents V2VFusion, a multimodal perception framework that fuses LiDAR and camera sensor inputs to improve the performance of V2V systems. Firstly, we implement a baseline system for multimodal fusion in V2V scenarios, effectively integrating data from LiDAR and camera sensors. This baseline provides a comparable benchmark for subsequent research. Secondly, we explore different fusion strategies, including concatenation, element-wise summation, and transformer methods, to investigate their impact on fusion performance. Lastly, we conduct experiments and evaluations on the OPV2V dataset. The experimental results demonstrate that the multimodal perception method achieves better performance and robustness in V2V tasks, providing more accurate object detection results and thereby improving the safety and reliability of autonomous driving systems.
AB - Current vehicle-to-vehicle (V2V) research mainly centers on either LiDAR or camera-based perception. Yet, combining data from multiple sensors offers a more complete and precise understanding of the environment. This paper presents V2VFusion, a multimodal perception framework that fuses LiDAR and camera sensor inputs to improve the performance of V2V systems. Firstly, we implement a baseline system for multimodal fusion in V2V scenarios, effectively integrating data from LiDAR and camera sensors. This baseline provides a comparable benchmark for subsequent research. Secondly, we explore different fusion strategies, including concatenation, element-wise summation, and transformer methods, to investigate their impact on fusion performance. Lastly, we conduct experiments and evaluations on the OPV2V dataset. The experimental results demonstrate that the multimodal perception method achieves better performance and robustness in V2V tasks, providing more accurate object detection results and thereby improving the safety and reliability of autonomous driving systems.
KW - autonomous driving
KW - cooperative perception
KW - multimodal fusion
KW - vehicle-to-vehicle
UR - http://www.scopus.com/inward/record.url?scp=85189364904&partnerID=8YFLogxK
U2 - 10.1109/CAC59555.2023.10450676
DO - 10.1109/CAC59555.2023.10450676
M3 - Conference contribution
AN - SCOPUS:85189364904
T3 - Proceedings - 2023 China Automation Congress, CAC 2023
SP - 3691
EP - 3696
BT - Proceedings - 2023 China Automation Congress, CAC 2023
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 17 November 2023 through 19 November 2023
ER -