VDG: Vision-Only Dynamic Gaussian for Driving Simulation

Hao Li, Jingfeng Li, Dingwen Zhang, Chenming Wu, Jieqi Shi, Chen Zhao, Haocheng Feng, Errui Ding, Jingdong Wang, Junwei Han

Research output: Contribution to journal › Article › peer-review


Abstract

Recent advances in dynamic Gaussian splatting have significantly improved scene reconstruction and novel-view synthesis. However, existing methods often rely on pre-computed camera poses and Gaussian initialization from Structure from Motion (SfM) or other costly sensors, which limits their scalability. In this letter, we propose Vision-only Dynamic Gaussian (VDG), a novel method that, for the first time, integrates self-supervised visual odometry (VO) into a pose-free dynamic Gaussian splatting framework. Because the estimated poses are not accurate enough to support self-decomposition of dynamic scenes, we specifically design a motion-supervision scheme that enables precise static-dynamic decomposition and models dynamic objects via dynamic Gaussians. Extensive experiments on urban driving datasets, including KITTI and Waymo, show that VDG consistently outperforms state-of-the-art dynamic view synthesis methods in both reconstruction accuracy and pose prediction using only image input.

Original language: English
Pages (from-to): 5138-5145
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 10
Issue number: 5
DOIs
State: Published - 2025

Keywords

  • computer vision for transportation
  • intelligent transportation systems
  • simulation and animation
