Generalizable 3D Gaussian Splatting for novel view synthesis

Chuyue Zhao, Xin Huang, Kun Yang, Xue Wang, Qing Wang

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

We present a generalizable 3D Gaussian Splatting (3DGS) method that can synthesize novel views of unseen scenes. Existing methods directly input image features into the parameter regression network without establishing a connection to the 3D representation, leading to inaccurate parameter predictions and artifacts in the rendered views. To address this issue, our method integrates spatial information from multiple source views. Specifically, by leveraging multi-view feature mapping to bridge 2D features with 3D representations, our method directly aligns the Gaussians with image features. The well-aligned features provide guidance for the accurate prediction of Gaussian parameters, thereby enhancing the ability to represent unseen scenes and alleviating artifacts caused by feature sampling ambiguity. The proposed framework is fully differentiable and allows optimizing Gaussian parameters in a feed-forward manner. After training on a large dataset of real-world scenes, our method enables novel view synthesis of unseen scenes without the need for optimization. Experimental results on real-world datasets demonstrate that our method outperforms recent novel view synthesis methods that also seek to generalize to unseen scenes.
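
To make the high-level idea concrete, the sketch below illustrates one plausible way to align Gaussians with multi-view image features and regress their parameters in a feed-forward pass: project each Gaussian center into every source view, sample the corresponding 2D feature maps, fuse the samples, and feed the fused feature to a small regression head. This is a minimal, hypothetical PyTorch sketch, not the authors' implementation; all function and module names (e.g. project_points, fuse_multiview_features, GaussianParamHead) and the specific parameterization are assumptions for illustration only.

```python
# Hypothetical sketch: sample multi-view features at 3D Gaussian centers and
# regress per-Gaussian parameters in a feed-forward manner. Not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def project_points(points, K, w2c, image_size):
    """Project world-space points into a source view; return coords in [-1, 1]."""
    H, W = image_size
    P = points.shape[0]
    homo = torch.cat([points, torch.ones(P, 1)], dim=-1)           # (P, 4)
    cam = (w2c @ homo.T).T[:, :3]                                   # camera space
    uv = (K @ cam.T).T                                              # (P, 3)
    uv = uv[:, :2] / uv[:, 2:].clamp(min=1e-6)                      # pixel coords
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    ndc = torch.stack([2 * uv[:, 0] / (W - 1) - 1,
                       2 * uv[:, 1] / (H - 1) - 1], dim=-1)         # (P, 2)
    return ndc


def fuse_multiview_features(points, feat_maps, Ks, w2cs, image_size):
    """Sample each view's feature map at the projected centers and average."""
    sampled = []
    for feat, K, w2c in zip(feat_maps, Ks, w2cs):                   # feat: (C, H, W)
        ndc = project_points(points, K, w2c, image_size)            # (P, 2)
        grid = ndc.view(1, -1, 1, 2)                                # (1, P, 1, 2)
        f = F.grid_sample(feat.unsqueeze(0), grid,
                          mode='bilinear', align_corners=True)      # (1, C, P, 1)
        sampled.append(f.squeeze(0).squeeze(-1).T)                  # (P, C)
    return torch.stack(sampled, dim=0).mean(dim=0)                  # (P, C)


class GaussianParamHead(nn.Module):
    """Regress per-Gaussian parameters from fused multi-view features."""

    def __init__(self, feat_dim=32, hidden=64):
        super().__init__()
        # 3 offset + 3 scale + 4 rotation (quaternion) + 1 opacity + 3 color = 14
        self.mlp = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 14))

    def forward(self, feats):
        out = self.mlp(feats)
        offset = out[:, 0:3]                       # center refinement
        scale = F.softplus(out[:, 3:6])            # positive scales
        rot = F.normalize(out[:, 6:10], dim=-1)    # unit quaternion
        opacity = torch.sigmoid(out[:, 10:11])
        color = torch.sigmoid(out[:, 11:14])
        return offset, scale, rot, opacity, color


if __name__ == "__main__":
    # Dummy data: 1024 Gaussians, 3 source views with 32-channel feature maps.
    P, C, H, W, V = 1024, 32, 64, 64, 3
    points = torch.randn(P, 3)
    feat_maps = [torch.randn(C, H, W) for _ in range(V)]
    Ks = [torch.eye(3) * torch.tensor([50.0, 50.0, 1.0]) for _ in range(V)]
    w2cs = [torch.eye(4) for _ in range(V)]
    fused = fuse_multiview_features(points, feat_maps, Ks, w2cs, (H, W))
    offset, scale, rot, opacity, color = GaussianParamHead(feat_dim=C)(fused)
    print(offset.shape, scale.shape, rot.shape, opacity.shape, color.shape)
```

Because every step (projection, bilinear feature sampling, MLP regression) is differentiable, such a pipeline can be trained end-to-end across many scenes and then applied to unseen scenes without per-scene optimization, which is the property the abstract emphasizes.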

Original language: English
Article number: 111271
Journal: Pattern Recognition
Volume: 161
DOIs
State: Published - May 2025

Keywords

  • 3D Gaussian Splatting
  • Generalizable scene representation
  • Image-based rendering
  • Novel view synthesis
