TY - GEN
T1 - 3D Reconstruction and Rendering Based on Improved Neural Radiance Field
AU - Wan, Xiaona
AU - Xu, Ziyun
AU - Kang, Jian
AU - Feng, Xiaoyi
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - In this paper, we propose a 3D reconstruction and rendering method based on an improved neural radiance field. To address the inefficiency of spatial sampling in neural radiance fields, we employ a depth-guided ray sampling method to obtain more accurate 3D points. To address missing depth and normal information, we propose a monocular geometry-supervised volume rendering method that takes as input the depth and normal maps estimated by a dual-stream network, allowing the neural radiance field to learn richer depth and normal information. Meanwhile, to tackle imprecise pose estimation, we propose an inter-frame constraint strategy that enhances camera pose optimization through geometric consistency and photometric consistency losses. Compared with other algorithms based on neural radiance fields, our method improves the novel view rendering metric PSNR by at least 0.355 on average and the pose estimation metric ATE by at least 0.025 m on average on the ScanNet and Tanks and Temples datasets. Additionally, compared with Nerfacto, an improved neural radiance field algorithm, our method improves the F-score of the 3D point cloud on the Tanks and Temples dataset by 1.49.
AB - In this paper, we propose a 3D reconstruction and rendering method based on an improved neural radiance field. To address the inefficiency of spatial sampling in neural radiance fields, we employ a depth-guided ray sampling method to obtain more accurate 3D points. To address missing depth and normal information, we propose a monocular geometry-supervised volume rendering method that takes as input the depth and normal maps estimated by a dual-stream network, allowing the neural radiance field to learn richer depth and normal information. Meanwhile, to tackle imprecise pose estimation, we propose an inter-frame constraint strategy that enhances camera pose optimization through geometric consistency and photometric consistency losses. Compared with other algorithms based on neural radiance fields, our method improves the novel view rendering metric PSNR by at least 0.355 on average and the pose estimation metric ATE by at least 0.025 m on average on the ScanNet and Tanks and Temples datasets. Additionally, compared with Nerfacto, an improved neural radiance field algorithm, our method improves the F-score of the 3D point cloud on the Tanks and Temples dataset by 1.49.
KW - 3D Reconstruction
KW - Dual-stream network
KW - Inter-frame constraints
KW - Neural radiance fields
UR - http://www.scopus.com/inward/record.url?scp=85199440004&partnerID=8YFLogxK
U2 - 10.1109/ICIPMC62364.2024.10586710
DO - 10.1109/ICIPMC62364.2024.10586710
M3 - Conference contribution
AN - SCOPUS:85199440004
T3 - 2024 3rd International Conference on Image Processing and Media Computing, ICIPMC 2024
SP - 120
EP - 126
BT - 2024 3rd International Conference on Image Processing and Media Computing, ICIPMC 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 3rd International Conference on Image Processing and Media Computing, ICIPMC 2024
Y2 - 17 May 2024 through 19 May 2024
ER -