TY - GEN
T1 - Ev3DGS
T2 - 2024 Asia Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2024
AU - Huang, Junwu
AU - Wan, Zhexiong
AU - Lu, Zhicheng
AU - Zhu, Juanjuan
AU - He, Mingyi
AU - Dai, Yuchao
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - The novel view synthesis task takes a source image, a source pose, and a target pose as input and renders the corresponding target image. However, obtaining clear novel-view images from only a set of blurred images and their corresponding poses is a challenging problem. To address it, we leverage the strong performance of 3D Gaussian Splatting (3DGS) in 3D scene reconstruction together with the remarkable effectiveness of event cameras for deblurring. Inspired by the Event-Enhanced Neural Radiance Fields (E2NeRF) model, which is likewise based on event enhancement, we propose Event-Enhanced 3DGS (Ev3DGS), a new 3D reconstruction framework built on 3DGS that exploits combined data from event cameras and standard RGB cameras. We introduce the event stream into the 3D Gaussian optimization by constructing a blur rendering loss and an event rendering loss, which guide the optimization by modeling the blurred-image and event generation processes. Compared with E2NeRF, the proposed Ev3DGS framework improves rendering performance and reduces training time. Ev3DGS not only achieves image deblurring but also realizes high-quality novel view synthesis. Extensive experiments on both synthetic and real-world datasets show that Ev3DGS can effectively learn a clear 3DGS representation from blurred image inputs, making 3DGS more robust. Our code and the datasets used are publicly available at https://github.com/HuuuangJW/Ev3DGS.
AB - The novel view synthesis task takes a source image, a source pose, and a target pose as input and renders the corresponding target image. However, obtaining clear novel-view images from only a set of blurred images and their corresponding poses is a challenging problem. To address it, we leverage the strong performance of 3D Gaussian Splatting (3DGS) in 3D scene reconstruction together with the remarkable effectiveness of event cameras for deblurring. Inspired by the Event-Enhanced Neural Radiance Fields (E2NeRF) model, which is likewise based on event enhancement, we propose Event-Enhanced 3DGS (Ev3DGS), a new 3D reconstruction framework built on 3DGS that exploits combined data from event cameras and standard RGB cameras. We introduce the event stream into the 3D Gaussian optimization by constructing a blur rendering loss and an event rendering loss, which guide the optimization by modeling the blurred-image and event generation processes. Compared with E2NeRF, the proposed Ev3DGS framework improves rendering performance and reduces training time. Ev3DGS not only achieves image deblurring but also realizes high-quality novel view synthesis. Extensive experiments on both synthetic and real-world datasets show that Ev3DGS can effectively learn a clear 3DGS representation from blurred image inputs, making 3DGS more robust. Our code and the datasets used are publicly available at https://github.com/HuuuangJW/Ev3DGS.
UR - http://www.scopus.com/inward/record.url?scp=85218197706&partnerID=8YFLogxK
U2 - 10.1109/APSIPAASC63619.2025.10848695
DO - 10.1109/APSIPAASC63619.2025.10848695
M3 - Conference contribution
AN - SCOPUS:85218197706
T3 - APSIPA ASC 2024 - Asia Pacific Signal and Information Processing Association Annual Summit and Conference 2024
BT - APSIPA ASC 2024 - Asia Pacific Signal and Information Processing Association Annual Summit and Conference 2024
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 3 December 2024 through 6 December 2024
ER -