TY - GEN
T1 - A Novel LiDAR-Camera Fusion Method for Enhanced Odometry
AU - Dong, Hao
AU - Xun, Yijie
AU - He, Yuchao
AU - Liu, Jiajia
AU - Mao, Bomin
AU - Guo, Hongzhi
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - The development of Autonomous Vehicles (AVs) provides users with high-quality services and convenient travel experiences. As one of the most important functions in the automotive field, mobile positioning has attracted widespread attention from scholars. However, using a single-modal sensor (LiDAR or camera) poses challenges for precise localization due to its measurement flaws. Therefore, some scholars have proposed Visual-LiDAR Odometry (VLO). Nevertheless, most existing VLO methods use only a single-modal sensor as the main framework and employ the other sensor for optimization, which does not fully leverage the complementary behavior of the sensors in different environments. Thus, this paper presents a novel LiDAR-camera fusion method to improve odometry estimation. Firstly, we employ a depth completion network to convert the image into pseudo-LiDAR to compensate for the missing depth values in the LiDAR point clouds. Then, we adopt Bayesian inference to enhance the robustness of the fusion method in different environments. Finally, evaluations on the public KITTI odometry benchmark show that the proposed method outperforms several state-of-the-art methods.
AB - The development of Autonomous Vehicles (AVs) provides users with high-quality services and convenient travel experiences. As one of the most important functions in the automotive field, mobile positioning has attracted widespread attention from scholars. However, using a single-modal sensor (LiDAR or camera) poses challenges for precise localization due to its measurement flaws. Therefore, some scholars have proposed Visual-LiDAR Odometry (VLO). Nevertheless, most existing VLO methods use only a single-modal sensor as the main framework and employ the other sensor for optimization, which does not fully leverage the complementary behavior of the sensors in different environments. Thus, this paper presents a novel LiDAR-camera fusion method to improve odometry estimation. Firstly, we employ a depth completion network to convert the image into pseudo-LiDAR to compensate for the missing depth values in the LiDAR point clouds. Then, we adopt Bayesian inference to enhance the robustness of the fusion method in different environments. Finally, evaluations on the public KITTI odometry benchmark show that the proposed method outperforms several state-of-the-art methods.
UR - http://www.scopus.com/inward/record.url?scp=105000829100&partnerID=8YFLogxK
U2 - 10.1109/GLOBECOM52923.2024.10901103
DO - 10.1109/GLOBECOM52923.2024.10901103
M3 - Conference contribution
AN - SCOPUS:105000829100
T3 - Proceedings - IEEE Global Communications Conference, GLOBECOM
SP - 3637
EP - 3642
BT - GLOBECOM 2024 - 2024 IEEE Global Communications Conference
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE Global Communications Conference, GLOBECOM 2024
Y2 - 8 December 2024 through 12 December 2024
ER -