VDG: Vision-Only Dynamic Gaussian for Driving Simulation

Hao Li, Jingfeng Li, Dingwen Zhang, Chenming Wu, Jieqi Shi, Chen Zhao, Haocheng Feng, Errui Ding, Jingdong Wang, Junwei Han

Research output: Contribution to journal › Article › peer-review

Abstract

Recent advances in dynamic Gaussian splatting have significantly improved scene reconstruction and novel-view synthesis. However, existing methods often rely on pre-computed camera poses and Gaussian initialization from Structure from Motion (SfM) or other costly sensors, limiting their scalability. In this paper, we propose Vision-only Dynamic Gaussian (VDG), a novel method that, for the first time, integrates self-supervised visual odometry (VO) into a pose-free dynamic Gaussian splatting framework. Because the estimated poses are not accurate enough to support self-supervised decomposition of dynamic scenes, we specifically design motion supervision, enabling precise static-dynamic decomposition and modeling of dynamic objects via dynamic Gaussians. Extensive experiments on urban driving datasets, including KITTI and Waymo, show that VDG consistently outperforms state-of-the-art dynamic view synthesis methods in both reconstruction accuracy and pose prediction with only image input. Project page: https://3daigc.github.io/VDG/.

Original language: English
Journal: IEEE Robotics and Automation Letters
DOI
Publication status: Accepted/In press - 2025
