TY - GEN
T1 - Edge-Guided Detector-Free Network for Robust and Accurate Visible-Thermal Image Matching
AU - Li, Yanping
AU - Qi, Zhaoshuai
AU - Zhang, Xiuwei
AU - Zhuo, Tao
AU - Liang, Yue
AU - Zhang, Yanning
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Recent detector-free models strive to leverage both local and global context for image matching, showcasing enhanced robustness, particularly in weakly textured scenes. Despite these advancements, automatically establishing feature correspondences between visible and thermal images still poses additional challenges. Differences in radiation and geometry between these modalities often result in degraded performance for most existing methods. To this end, we propose an edge-guided detector-free model, termed EDMatcher, for visible-thermal image matching. Besides the local and global context in the images, EDMatcher also leverages modality-robust structural information from image edges, which yields promising robustness to images of distinct modalities. Moreover, an edge-masked ground-truth matrix generation strategy is introduced during training, which helps EDMatcher focus on more salient regions while leaving out texture-less regions, leading to more efficient learning. Extensive experiments show that EDMatcher has strong generalization and achieves excellent matching performance.
AB - Recent detector-free models strive to leverage both local and global context for image matching, showcasing enhanced robustness, particularly in weakly textured scenes. Despite these advancements, automatically establishing feature correspondences between visible and thermal images still poses additional challenges. Differences in radiation and geometry between these modalities often result in degraded performance for most existing methods. To this end, we propose an edge-guided detector-free model, termed EDMatcher, for visible-thermal image matching. Besides the local and global context in the images, EDMatcher also leverages modality-robust structural information from image edges, which yields promising robustness to images of distinct modalities. Moreover, an edge-masked ground-truth matrix generation strategy is introduced during training, which helps EDMatcher focus on more salient regions while leaving out texture-less regions, leading to more efficient learning. Extensive experiments show that EDMatcher has strong generalization and achieves excellent matching performance.
KW - Feature Matching
KW - Multi-modal Image Matching
KW - Vision Transformers
UR - http://www.scopus.com/inward/record.url?scp=85206571645&partnerID=8YFLogxK
U2 - 10.1109/ICME57554.2024.10688313
DO - 10.1109/ICME57554.2024.10688313
M3 - Conference contribution
AN - SCOPUS:85206571645
T3 - Proceedings - IEEE International Conference on Multimedia and Expo
BT - 2024 IEEE International Conference on Multimedia and Expo, ICME 2024
PB - IEEE Computer Society
T2 - 2024 IEEE International Conference on Multimedia and Expo, ICME 2024
Y2 - 15 July 2024 through 19 July 2024
ER -