TY - JOUR
T1 - Online Targetless Camera-Lidar Extrinsic Calibration Using Object-oriented Semantic Feature
AU - Ji, Xinchun
AU - He, Zheng
AU - Gu, Shuhang
AU - Wei, Dongyan
AU - Zhou, Deyun
N1 - Publisher Copyright:
© 2026 IEEE.
PY - 2026
Y1 - 2026
N2 - Accurate extrinsic parameters are critical for fusing heterogeneous data from cameras and Lidars. To address the problems of incomplete feature extraction, poor scene adaptability, and unstable parameter convergence in conventional automatic calibration methods, this paper proposes an online extrinsic calibration method using object-oriented semantic features, which consists of a coarse calibration based on trajectory consistency and a fine calibration based on feature consistency. With the initial parameters from the coarse calibration, the fine calibration projects the sparse point cloud into a pseudo-image to obtain its depth, intensity, and class attributes simultaneously. Then, the SAM network is used to process both the visual image and the point-cloud pseudo-image to extract consistent object-oriented features. To enhance the stability and adaptability of extrinsic calibration across various scenes, an observability-based IoU is proposed to perform camera-Lidar feature matching. In addition, an adaptability function is designed to evaluate the quality of scene features, thereby ensuring the effectiveness of extrinsic calibration. Test results on the KITTI dataset and a real-world dataset demonstrate that the proposed method can effectively obtain accurate extrinsic parameters in multiple scenes. Compared with the ground truth, the rotation and translation errors are less than (0.15°, 0.05 m), and the variances are less than (2.0×10⁻³ deg², 3.0×10⁻⁴ m²), indicating good potential for engineering applications.
AB - Accurate extrinsic parameters are critical for fusing heterogeneous data from cameras and Lidars. To address the problems of incomplete feature extraction, poor scene adaptability, and unstable parameter convergence in conventional automatic calibration methods, this paper proposes an online extrinsic calibration method using object-oriented semantic features, which consists of a coarse calibration based on trajectory consistency and a fine calibration based on feature consistency. With the initial parameters from the coarse calibration, the fine calibration projects the sparse point cloud into a pseudo-image to obtain its depth, intensity, and class attributes simultaneously. Then, the SAM network is used to process both the visual image and the point-cloud pseudo-image to extract consistent object-oriented features. To enhance the stability and adaptability of extrinsic calibration across various scenes, an observability-based IoU is proposed to perform camera-Lidar feature matching. In addition, an adaptability function is designed to evaluate the quality of scene features, thereby ensuring the effectiveness of extrinsic calibration. Test results on the KITTI dataset and a real-world dataset demonstrate that the proposed method can effectively obtain accurate extrinsic parameters in multiple scenes. Compared with the ground truth, the rotation and translation errors are less than (0.15°, 0.05 m), and the variances are less than (2.0×10⁻³ deg², 3.0×10⁻⁴ m²), indicating good potential for engineering applications.
KW - Lidar
KW - camera
KW - extrinsic calibration
KW - semantic segmentation
KW - sensor fusion
UR - https://www.scopus.com/pages/publications/105030722031
U2 - 10.1109/JSEN.2026.3663294
DO - 10.1109/JSEN.2026.3663294
M3 - Article
AN - SCOPUS:105030722031
SN - 1530-437X
JO - IEEE Sensors Journal
JF - IEEE Sensors Journal
ER -