TY - GEN
T1 - Neural Deformable Voxel Grid for Fast Optimization of Dynamic View Synthesis
AU - Guo, Xiang
AU - Chen, Guanying
AU - Dai, Yuchao
AU - Ye, Xiaoqing
AU - Sun, Jiadai
AU - Tan, Xiao
AU - Ding, Errui
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2023
Y1 - 2023
N2 - Recently, Neural Radiance Fields (NeRF) have revolutionized the task of novel view synthesis (NVS) with their superior performance. In this paper, we target the synthesis of dynamic scenes. Extending methods for static scenes to dynamic scenes is not straightforward, as both the scene geometry and appearance change over time, especially under a monocular setup. Moreover, existing dynamic NeRF methods generally require a lengthy per-scene training procedure, in which multi-layer perceptrons (MLPs) are fitted to model both motion and radiance. Building on recent advances in voxel-grid optimization, we propose a fast deformable radiance field method to handle dynamic scenes. Our method consists of two modules. The first module adopts a deformation grid to store 3D dynamic features, together with a lightweight MLP that decodes, from the interpolated features, the deformation mapping a 3D point in the observation space to the canonical space. The second module contains a density grid and a color grid to model the geometry and appearance of the scene. Occlusion is explicitly modeled to further improve rendering quality. Experimental results show that our method achieves performance comparable to D-NeRF with only 20 minutes of training, more than 70× faster than D-NeRF, clearly demonstrating the efficiency of our proposed method.
AB - Recently, Neural Radiance Fields (NeRF) have revolutionized the task of novel view synthesis (NVS) with their superior performance. In this paper, we target the synthesis of dynamic scenes. Extending methods for static scenes to dynamic scenes is not straightforward, as both the scene geometry and appearance change over time, especially under a monocular setup. Moreover, existing dynamic NeRF methods generally require a lengthy per-scene training procedure, in which multi-layer perceptrons (MLPs) are fitted to model both motion and radiance. Building on recent advances in voxel-grid optimization, we propose a fast deformable radiance field method to handle dynamic scenes. Our method consists of two modules. The first module adopts a deformation grid to store 3D dynamic features, together with a lightweight MLP that decodes, from the interpolated features, the deformation mapping a 3D point in the observation space to the canonical space. The second module contains a density grid and a color grid to model the geometry and appearance of the scene. Occlusion is explicitly modeled to further improve rendering quality. Experimental results show that our method achieves performance comparable to D-NeRF with only 20 minutes of training, more than 70× faster than D-NeRF, clearly demonstrating the efficiency of our proposed method.
KW - Dynamic view synthesis
KW - Fast optimization
KW - Neural radiance fields
KW - Voxel-grid representation
UR - http://www.scopus.com/inward/record.url?scp=85151065395&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-26319-4_27
DO - 10.1007/978-3-031-26319-4_27
M3 - Conference contribution
AN - SCOPUS:85151065395
SN - 9783031263187
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 450
EP - 468
BT - Computer Vision – ACCV 2022 - 16th Asian Conference on Computer Vision, 2022, Proceedings
A2 - Wang, Lei
A2 - Gall, Juergen
A2 - Chin, Tat-Jun
A2 - Sato, Imari
A2 - Chellappa, Rama
PB - Springer Science and Business Media Deutschland GmbH
T2 - 16th Asian Conference on Computer Vision, ACCV 2022
Y2 - 4 December 2022 through 8 December 2022
ER -