TY - JOUR
T1 - A Representation Separation Perspective to Correspondence-Free Unsupervised 3-D Point Cloud Registration
AU - Zhang, Zhiyuan
AU - Sun, Jiadai
AU - Dai, Yuchao
AU - Zhou, Dingfu
AU - Song, Xibin
AU - He, Mingyi
N1 - Publisher Copyright:
© 2004-2012 IEEE.
PY - 2022
Y1 - 2022
N2 - 3-D point cloud registration in the remote sensing field has been greatly advanced by deep learning-based methods, where the rigid transformation is either directly regressed from the two point clouds (correspondence-free approaches) or computed from learned correspondences (correspondence-based approaches). Existing correspondence-free methods generally learn a holistic representation of the entire point cloud, which is fragile for partial and noisy point clouds. In this letter, we propose a correspondence-free unsupervised point cloud registration (UPCR) method from a representation separation perspective. First, we model the input point cloud as a combination of a pose-invariant representation and a pose-related representation. Second, the pose-related representation is used to learn the relative pose w.r.t. a 'latent canonical shape' for the source and target point clouds, respectively. Third, the rigid transformation is obtained from the two learned relative poses. Our method not only filters out disturbances in the pose-invariant representation but is also robust to partial-to-partial point clouds and noise. Experiments on benchmark datasets demonstrate that our unsupervised method achieves performance comparable to, if not better than, state-of-the-art supervised registration methods. The source code will be made public.
KW - Correspondences-free
KW - point cloud registration
KW - representation separation
KW - unsupervised
UR - http://www.scopus.com/inward/record.url?scp=85121377965&partnerID=8YFLogxK
U2 - 10.1109/LGRS.2021.3132926
DO - 10.1109/LGRS.2021.3132926
M3 - Article
AN - SCOPUS:85121377965
SN - 1545-598X
VL - 19
JO - IEEE Geoscience and Remote Sensing Letters
JF - IEEE Geoscience and Remote Sensing Letters
ER -