TY - GEN
T1 - Brain Inspired Keypoint Matching for 3D Scene Reconstruction
AU - Zaman, Anam
AU - Yangyu, Fan
AU - Ayub, Muhammad Saad
AU - Guoyun, L. V.
AU - Shiva, Liu
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - In this paper, we investigate the keypoint matching problem in a 3D scene reconstruction system. 3D scene reconstruction from a sequential set of images or video is an essential component of various virtual reality (VR) and augmented reality (AR) solutions. Keypoint matching is necessary for achieving a close-to-reality model of the scene from varying views. Although deep learning-based methods have been proposed for image matching using keypoints and their respective descriptors, these methods do not take previous image matches into account when performing correspondence on the current pair of images. This is crucial in the presence of sequential images or frames from a video. A continual learning-based image matching framework is proposed that replicates the working of a human brain. The method efficiently extracts knowledge, stores the knowledge in its memory, and reuses it for future matches. The proposed method increases the expressiveness of the descriptors used for keypoint matching in a pair of images. Specifically, the methodology uses a continual graph attention network to find the correspondence among keypoints in a pair of images. The methodology is thoroughly validated on a challenging benchmark dataset, namely HPatches, and is evaluated against present state-of-the-art handcrafted and learning-based image matching methods under varying confidence thresholds. The experimental results reveal that the proposed methodology outperforms all the compared methods, achieving significant improvement.
AB - In this paper, we investigate the keypoint matching problem in a 3D scene reconstruction system. 3D scene reconstruction from a sequential set of images or video is an essential component of various virtual reality (VR) and augmented reality (AR) solutions. Keypoint matching is necessary for achieving a close-to-reality model of the scene from varying views. Although deep learning-based methods have been proposed for image matching using keypoints and their respective descriptors, these methods do not take previous image matches into account when performing correspondence on the current pair of images. This is crucial in the presence of sequential images or frames from a video. A continual learning-based image matching framework is proposed that replicates the working of a human brain. The method efficiently extracts knowledge, stores the knowledge in its memory, and reuses it for future matches. The proposed method increases the expressiveness of the descriptors used for keypoint matching in a pair of images. Specifically, the methodology uses a continual graph attention network to find the correspondence among keypoints in a pair of images. The methodology is thoroughly validated on a challenging benchmark dataset, namely HPatches, and is evaluated against present state-of-the-art handcrafted and learning-based image matching methods under varying confidence thresholds. The experimental results reveal that the proposed methodology outperforms all the compared methods, achieving significant improvement.
KW - 3D scene reconstruction
KW - Continual learning
KW - Graph Neural Networks
KW - Image keypoint matching
KW - Performance Evaluation
UR - http://www.scopus.com/inward/record.url?scp=85137152524&partnerID=8YFLogxK
U2 - 10.1109/ICVR55215.2022.9847807
DO - 10.1109/ICVR55215.2022.9847807
M3 - Conference contribution
AN - SCOPUS:85137152524
T3 - International Conference on Virtual Reality, ICVR
SP - 33
EP - 40
BT - 2022 8th International Conference on Virtual Reality, ICVR 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 8th International Conference on Virtual Reality, ICVR 2022
Y2 - 26 May 2022 through 28 May 2022
ER -