TY - JOUR
T1 - Image Fusion via Vision-Language Model
AU - Zhao, Zixiang
AU - Deng, Lilun
AU - Bai, Haowen
AU - Cui, Yukun
AU - Zhang, Zhipeng
AU - Zhang, Yulun
AU - Qin, Haotong
AU - Chen, Dongdong
AU - Zhang, Jiangshe
AU - Wang, Peng
AU - Van Gool, Luc
N1 - Publisher Copyright:
Copyright 2024 by the author(s)
PY - 2024
Y1 - 2024
AB - Image fusion integrates essential information from multiple images into a single composite, enhancing structures and textures while refining imperfections. Existing methods predominantly focus on pixel-level and semantic visual features for recognition, but often overlook the deeper text-level semantic information beyond vision. Therefore, we introduce a novel fusion paradigm named image Fusion via vIsion-Language Model (FILM), for the first time utilizing explicit textual information from source images to guide the fusion process. Specifically, FILM generates semantic prompts from images and inputs them into ChatGPT for comprehensive textual descriptions. These descriptions are fused within the textual domain and guide the visual information fusion, enhancing feature extraction and contextual understanding, directed by textual semantic information via cross-attention. FILM has shown promising results in four image fusion tasks: infrared-visible, medical, multi-exposure, and multi-focus image fusion. We also propose a vision-language dataset containing ChatGPT-generated paragraph descriptions for the eight image fusion datasets across four fusion tasks, facilitating future research in vision-language model-based image fusion. Code and dataset are available at https://github.com/Zhaozixiang1228/IF-FILM.
UR - http://www.scopus.com/inward/record.url?scp=85203847918&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85203847918
SN - 2640-3498
VL - 235
SP - 60749
EP - 60765
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 41st International Conference on Machine Learning, ICML 2024
Y2 - 21 July 2024 through 27 July 2024
ER -