Abstract
Transformation equivariance has been widely investigated in 3D point cloud representation learning to obtain more informative descriptors, as it explicitly formulates how the representation changes with respect to a transformation of the input point clouds. In this paper, we extend this property to the task of 3D point cloud registration and propose rigid transformation equivariance (RTE) for accurate 3D point cloud registration. Specifically, RTE explicitly formulates the change of the relative pose with respect to rigid transformations of the input point clouds. To exploit RTE, we adopt a Siamese network with two shared registration branches. One operates on the input pair of point clouds, and the other operates on a new pair obtained by applying two random rigid transformations to the input point clouds, respectively. Since the relation between the two output relative poses is determined in advance by RTE, an additional self-supervised loss is obtained to supervise the training. This general network structure can be easily integrated into most learning-based point cloud registration frameworks to improve their performance. Our method adopts state-of-the-art virtual point-based pipelines as the shared branches, in which we propose a data-driven matching based on a learned cost volume (LCV) rather than traditional hand-crafted matching strategies. Experimental evaluations on both synthetic and real datasets validate the effectiveness of the proposed framework. The source code will be made public.
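For intuition, the consistency implied by RTE can be sketched as follows. This is a minimal illustration in our own notation, not taken from the paper: $T_1$ and $T_2$ denote the random rigid transforms applied to the source and target clouds, and $\hat{T}$, $\hat{T}'$ denote the relative poses predicted by the two branches. If the ground-truth relative pose of the original pair is $T$, the perturbed pair has relative pose $T_2\,T\,T_1^{-1}$, so the two branch outputs can be tied by a self-supervised consistency term.

```latex
% Sketch of the RTE relation (our notation, not from the paper).
% T maps the source cloud P onto the target cloud Q: Q ~= T(P).
% After perturbing P by T1 and Q by T2, the new relative pose satisfies
% T'(T1(P)) ~= T2(T(P)), hence T' = T2 * T * T1^{-1}.
\[
  \hat{T}' \;\approx\; T_2\,\hat{T}\,T_1^{-1}
  \qquad\Longrightarrow\qquad
  \mathcal{L}_{\mathrm{self}}
  \;=\; d\!\left(\hat{T}',\; T_2\,\hat{T}\,T_1^{-1}\right),
\]
% where d(.,.) is some distance between rigid transformations,
% e.g. a rotation geodesic distance plus a translation error.
```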
| Original language | English |
| --- | --- |
| Article number | 108784 |
| Journal | Pattern Recognition |
| Volume | 130 |
| DOI | |
| Publication status | Published - Oct 2022 |