Learning Spectral Cues for Multispectral and Panchromatic Image Fusion

Yinghui Xing, Shuyuan Yang, Yan Zhang, Yanning Zhang

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

Recently, deep learning based multispectral (MS) and panchromatic (PAN) image fusion methods have been proposed that extract features automatically and hierarchically through a series of non-linear transformations to model the complicated imaging discrepancy. However, they typically focus on the extraction and compensation of spatial details and use the mean squared error or mean absolute error as the loss function, neglecting the preservation of the spectral information contained in multispectral images. To improve both spatial and spectral resolution, this paper presents a novel fusion model that takes spectral preservation into consideration and learns spectral cues from the process of generating a spectrally refined multispectral image, which is constrained by a spectral loss between the generated image and the reference image. These spectral cues are then used to modulate the PAN features to obtain the final fusion result. Experimental results on reduced-resolution and full-resolution datasets demonstrate that the proposed method achieves better fusion results, in terms of both visual inspection and evaluation indices, than current state-of-the-art methods.
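The abstract does not specify the form of the spectral loss or the modulation mechanism. As a minimal illustrative sketch only (not the authors' implementation), one common choice of spectral loss is the spectral angle between generated and reference MS pixels, and a simple way to modulate PAN features with spectral cues is a channel-wise scale-and-shift; both choices here are assumptions:

```python
import numpy as np

def spectral_angle_loss(pred, ref, eps=1e-8):
    """Mean spectral angle (radians) between two MS images of shape (H, W, C).

    A small angle means the per-pixel spectral signatures of `pred`
    closely match those of `ref`, regardless of overall intensity.
    """
    dot = np.sum(pred * ref, axis=-1)
    norm = np.linalg.norm(pred, axis=-1) * np.linalg.norm(ref, axis=-1)
    cos = np.clip(dot / (norm + eps), -1.0, 1.0)
    return np.arccos(cos).mean()

def modulate(pan_feats, scale, shift):
    """Channel-wise modulation of PAN features (H, W, C) by spectral cues.

    `scale` and `shift` (each of length C) stand in for the learned
    spectral cues; the scale-and-shift form is an assumption.
    """
    return pan_feats * scale + shift
```

In a training loop, `spectral_angle_loss` would be added to a spatial reconstruction loss (e.g. MAE), so the network is penalized for spectral distortion as well as for blurred details.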

Original language: English
Pages (from-to): 6964-6975
Number of pages: 12
Journal: IEEE Transactions on Image Processing
Volume: 31
DOI
Publication status: Published - 2022
