TY - GEN
T1 - An Infrared and Visible Image Fusion Method Based on Non-Subsampled Contourlet Transform and Joint Sparse Representation
AU - He, Guiqing
AU - Dong, Dandan
AU - Xia, Zhaoqiang
AU - Xing, Siyuan
AU - Wei, Yijing
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2017/5/1
Y1 - 2017/5/1
N2 - In conventional fusion methods based on the Non-Subsampled Contourlet Transform (NSCT), the low-frequency subband coefficients cannot sparsely represent an image's low-frequency information, which hinders the extraction of source-image features. To address this issue, an infrared and visible image fusion method based on NSCT and joint sparse representation (JSR) is proposed. Applying JSR to the low-frequency information improves the sparsity of the low-frequency subband, which contains most of the image's energy, while for the high-frequency information a feature-product fusion rule helps extract the detail features of the source images. Experimental results indicate that, compared with the conventional multiscale-transform-based DWT and NSCT fusion methods and the sparse-representation-based SR and JSR algorithms, the proposed method achieves a better fusion effect, better preserving both the target information of the infrared image and the background detail information (e.g., edges and texture) of the visible image.
AB - In conventional fusion methods based on the Non-Subsampled Contourlet Transform (NSCT), the low-frequency subband coefficients cannot sparsely represent an image's low-frequency information, which hinders the extraction of source-image features. To address this issue, an infrared and visible image fusion method based on NSCT and joint sparse representation (JSR) is proposed. Applying JSR to the low-frequency information improves the sparsity of the low-frequency subband, which contains most of the image's energy, while for the high-frequency information a feature-product fusion rule helps extract the detail features of the source images. Experimental results indicate that, compared with the conventional multiscale-transform-based DWT and NSCT fusion methods and the sparse-representation-based SR and JSR algorithms, the proposed method achieves a better fusion effect, better preserving both the target information of the infrared image and the background detail information (e.g., edges and texture) of the visible image.
KW - Feature product
KW - Image fusion
KW - Infrared and visible images
KW - Joint sparse
KW - Nonsubsampled Contourlet transform
UR - http://www.scopus.com/inward/record.url?scp=85020236527&partnerID=8YFLogxK
U2 - 10.1109/iThings-GreenCom-CPSCom-SmartData.2016.115
DO - 10.1109/iThings-GreenCom-CPSCom-SmartData.2016.115
M3 - Conference contribution
AN - SCOPUS:85020236527
T3 - Proceedings - 2016 IEEE International Conference on Internet of Things; IEEE Green Computing and Communications; IEEE Cyber, Physical, and Social Computing; IEEE Smart Data, iThings-GreenCom-CPSCom-Smart Data 2016
SP - 492
EP - 497
BT - Proceedings - 2016 IEEE International Conference on Internet of Things; IEEE Green Computing and Communications; IEEE Cyber, Physical, and Social Computing; IEEE Smart Data, iThings-GreenCom-CPSCom-Smart Data 2016
A2 - Liu, Xingang
A2 - Qiu, Tie
A2 - Li, Yayong
A2 - Guo, Bin
A2 - Ning, Zhaolong
A2 - Lu, Kaixuan
A2 - Dong, Mianxiong
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 9th IEEE International Conference on Internet of Things, 12th IEEE International Conference on Green Computing and Communications, 9th IEEE International Conference on Cyber, Physical, and Social Computing and 2016 IEEE International Conference on Smart Data, iThings-GreenCom-CPSCom-Smart Data 2016
Y2 - 16 December 2016 through 19 December 2016
ER -