TY - JOUR
T1 - Infrared and visible image fusion using a shallow CNN and structural similarity constraint
AU - Li, Lei
AU - Xia, Zhaoqiang
AU - Han, Huijian
AU - He, Guiqing
AU - Roli, Fabio
AU - Feng, Xiaoyi
N1 - Publisher Copyright:
© 2020 Institution of Engineering and Technology. All rights reserved.
PY - 2020/12/1
Y1 - 2020/12/1
N2 - In recent years, image fusion methods based on deep networks have been proposed to combine infrared and visible images into a better fused image. However, issues such as limited training data, scarce reference images, and misalignment of multi-source images still limit fusion performance. To address these problems, we propose an end-to-end shallow convolutional neural network with structural constraints, which has only one convolutional layer to fuse infrared and visible images. Unlike other methods, our proposed model requires less training data and fewer reference images, and it is more robust to misalignment between a pair of images. More specifically, the infrared image and the visible image are first provided as inputs to a convolutional layer to extract the information to be fused; then, all feature maps are concatenated and fed into a single-channel convolutional layer to obtain the fused image; finally, a structural similarity loss between the fused image and the input infrared and visible images is computed to update the network parameters and eliminate the effects of pixel misalignment. Extensive experiments demonstrate the effectiveness of the proposed method for infrared and visible image fusion, with performance that surpasses state-of-the-art methods.
AB - In recent years, image fusion methods based on deep networks have been proposed to combine infrared and visible images into a better fused image. However, issues such as limited training data, scarce reference images, and misalignment of multi-source images still limit fusion performance. To address these problems, we propose an end-to-end shallow convolutional neural network with structural constraints, which has only one convolutional layer to fuse infrared and visible images. Unlike other methods, our proposed model requires less training data and fewer reference images, and it is more robust to misalignment between a pair of images. More specifically, the infrared image and the visible image are first provided as inputs to a convolutional layer to extract the information to be fused; then, all feature maps are concatenated and fed into a single-channel convolutional layer to obtain the fused image; finally, a structural similarity loss between the fused image and the input infrared and visible images is computed to update the network parameters and eliminate the effects of pixel misalignment. Extensive experiments demonstrate the effectiveness of the proposed method for infrared and visible image fusion, with performance that surpasses state-of-the-art methods.
UR - http://www.scopus.com/inward/record.url?scp=85098724339&partnerID=8YFLogxK
U2 - 10.1049/iet-ipr.2020.0360
DO - 10.1049/iet-ipr.2020.0360
M3 - Article
AN - SCOPUS:85098724339
SN - 1751-9659
VL - 14
SP - 3562
EP - 3571
JO - IET Image Processing
JF - IET Image Processing
IS - 14
ER -