PR-RRN: Pairwise-Regularized Residual-Recursive Networks for Non-rigid Structure-from-Motion

Haitian Zeng, Yuchao Dai, Xin Yu, Xiaohan Wang, Yi Yang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-reviewed

10 Citations (Scopus)

Abstract

We propose PR-RRN, a novel neural-network-based method for Non-rigid Structure-from-Motion (NRSfM). PR-RRN consists of Residual-Recursive Networks (RRN) and two extra regularization losses. RRN is designed to effectively recover 3D shape and camera motion from 2D keypoints with a novel residual-recursive structure. As NRSfM is a highly under-constrained problem, we propose two new pairwise regularizations to further constrain the reconstruction. The Rigidity-based Pairwise Contrastive Loss regularizes the shape representation by encouraging higher similarity between the representations of high-rigidity pairs of frames than low-rigidity pairs. We propose the minimum singular-value ratio to measure pairwise rigidity. The Pairwise Consistency Loss enforces the reconstruction to be consistent when the estimated shapes and cameras are exchanged between pairs. Our approach achieves state-of-the-art performance on the CMU MOCAP and PASCAL3D+ datasets.
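The abstract does not spell out the minimum singular-value ratio. A minimal sketch of one plausible reading, assuming an orthographic camera and that a frame pair counts as rigid when their stacked 2D measurements are close to rank 3 (the `rigidity_ratio` and `project` names are illustrative, not from the paper):

```python
import numpy as np

def rigidity_ratio(W1, W2):
    # Stack the two (2, P) keypoint frames after centering each.
    # If both frames observe the same rigid shape under orthographic
    # projection, the stacked 4 x P matrix has rank <= 3, so the ratio
    # of the 4th to the 3rd singular value is near zero; larger values
    # indicate stronger non-rigid deformation between the pair.
    W = np.vstack([W1 - W1.mean(axis=1, keepdims=True),
                   W2 - W2.mean(axis=1, keepdims=True)])
    s = np.linalg.svd(W, compute_uv=False)  # singular values, descending
    return s[3] / s[2]

def project(S, angle):
    # Orthographic projection of a 3D shape S (3, P) after rotation
    # about the y-axis by `angle` (illustrative camera model).
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return (R @ S)[:2]

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 20))            # one rigid 3D shape
W1, W2 = project(S, 0.3), project(S, 1.1)   # same shape, two cameras
W3 = project(S + 0.5 * rng.standard_normal((3, 20)), 1.1)  # deformed
print(rigidity_ratio(W1, W2))  # near zero: high-rigidity pair
print(rigidity_ratio(W1, W3))  # clearly larger: low-rigidity pair
```

Such a score could then rank frame pairs, with the contrastive loss pulling together representations of the pairs it marks as most rigid.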

Original language: English
Title of host publication: Proceedings - 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 5580-5589
Number of pages: 10
ISBN (electronic): 9781665428125
DOI
Publication status: Published - 2021
Event: 18th IEEE/CVF International Conference on Computer Vision, ICCV 2021 - Virtual, Online, Canada
Duration: 11 Oct 2021 - 17 Oct 2021

Publication series

Name: Proceedings of the IEEE International Conference on Computer Vision
ISSN (print): 1550-5499

Conference

Conference: 18th IEEE/CVF International Conference on Computer Vision, ICCV 2021
Country/Territory: Canada
City: Virtual, Online
Period: 11/10/21 - 17/10/21
