VDG: Vision-Only Dynamic Gaussian for Driving Simulation

Hao Li, Jingfeng Li, Dingwen Zhang, Chenming Wu, Jieqi Shi, Chen Zhao, Haocheng Feng, Errui Ding, Jingdong Wang, Junwei Han

Research output: Contribution to journal › Article › peer-review

Abstract

Recent advances in dynamic Gaussian splatting have significantly improved scene reconstruction and novel-view synthesis. However, existing methods often rely on pre-computed camera poses and Gaussian initialization from Structure from Motion (SfM) or other costly sensors, limiting their scalability. In this paper, we propose Vision-only Dynamic Gaussian (VDG), a novel method that, for the first time, integrates self-supervised visual odometry (VO) into a pose-free dynamic Gaussian splatting framework. Because the estimated poses are not accurate enough to support self-decomposition of dynamic scenes, we specifically design a motion supervision that enables precise static-dynamic decomposition and models dynamic objects with dynamic Gaussians. Extensive experiments on urban driving datasets, including KITTI and Waymo, show that VDG consistently outperforms state-of-the-art dynamic view synthesis methods in both reconstruction accuracy and pose prediction using only image input. Project page: https://3daigc.github.io/VDG/.
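The static-dynamic decomposition described in the abstract rests on comparing observed pixel motion against the motion induced by the estimated camera pose alone: pixels whose optical flow cannot be explained by ego-motion are treated as dynamic. A minimal NumPy sketch of that residual-flow idea follows; the function name, inputs, and threshold are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def motion_mask(observed_flow, ego_flow, thresh=1.0):
    """Hypothetical sketch of residual-flow motion supervision.

    observed_flow : (H, W, 2) optical flow measured between frames.
    ego_flow      : (H, W, 2) flow induced by the estimated camera
                    pose under a static-scene assumption.
    Returns a boolean (H, W) mask; True marks pixels whose flow
    deviates from ego-motion, i.e. candidate dynamic regions.
    """
    residual = np.linalg.norm(observed_flow - ego_flow, axis=-1)
    return residual > thresh

# Toy example: one pixel of a 2x2 image moves independently
# of the (here, stationary) camera.
obs = np.zeros((2, 2, 2))
obs[0, 0] = [3.0, 0.0]          # independently moving object
ego = np.zeros((2, 2, 2))       # no ego-motion-induced flow
mask = motion_mask(obs, ego)    # only pixel (0, 0) flagged dynamic
```

In the VDG pipeline such a mask would gate which Gaussians are modeled as dynamic; the sketch only conveys the supervision signal, not the splatting or VO components.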

Original language: English
Journal: IEEE Robotics and Automation Letters
DOIs
State: Accepted/In press - 2025

Keywords

  • Computer Vision for Transportation
  • Intelligent Transportation Systems
  • Simulation and Animation
