TY - GEN
T1 - A Two-Stage Method for 3D Architecture Wireframe Reconstruction from Airborne LiDAR Point Cloud
AU - Zhang, Jiahao
AU - Liu, Qi
AU - Hui, Le
AU - Dai, Yuchao
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
AB - 3D roof reconstruction from airborne LiDAR point clouds is an important mid-level task in computer vision. Most existing roof reconstruction models are trained and tested on simulated roof datasets, and regressing a precise roof wireframe from complex real-world point clouds remains difficult. Building on the Building3D Tallinn dataset, we propose a deep learning-based two-stage approach that reconstructs the roof wireframe from an airborne LiDAR point cloud: the method first localizes the wireframe vertices and then predicts the edges connecting them. In the first stage, offsets are regressed from candidate corner points and clustering is applied to recover the actual corners of the point cloud. In the second stage, the predicted corners are fully connected in pairs to form candidate edges, and a Graph Neural Network extracts features for binary edge classification, yielding the final wireframe. In the experiments, we compare several mainstream backbone networks for feature extraction, and the top-performing variant outperforms the best approach reported on the Building3D Tallinn dataset.
UR - http://www.scopus.com/inward/record.url?scp=85218195398&partnerID=8YFLogxK
U2 - 10.1109/APSIPAASC63619.2025.10849258
DO - 10.1109/APSIPAASC63619.2025.10849258
M3 - Conference contribution
AN - SCOPUS:85218195398
T3 - APSIPA ASC 2024 - Asia Pacific Signal and Information Processing Association Annual Summit and Conference 2024
BT - APSIPA ASC 2024 - Asia Pacific Signal and Information Processing Association Annual Summit and Conference 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 Asia Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2024
Y2 - 3 December 2024 through 6 December 2024
ER -