TY - JOUR
T1 - Multiscale Factor Joint Learning for Hyperspectral Image Super-Resolution
AU - Li, Qiang
AU - Yuan, Yuan
AU - Wang, Qi
N1 - Publisher Copyright:
© 1980-2012 IEEE.
PY - 2023
Y1 - 2023
N2 - Hyperspectral image super-resolution (SR) using an auxiliary RGB image has achieved great success. Currently, most methods train a separate model for each scale factor, which may lead to inconsistency of spatial and spectral contents when the outputs are converted to the same size. In fact, this manner ignores the potential interdependence among different scale factors within a single model. To this end, we propose multiscale factor joint learning for hyperspectral image SR (MulSR). Specifically, to take advantage of the inherent priors of spatial and spectral information, a deep architecture using a single scale factor is designed with a symmetrical guided encoder (SGE) to explore the hyperspectral and RGB images. Considering the obvious differences in texture details at various scale factors, another architecture is proposed that is basically the same as the above, except that its scale factor is larger. On this basis, a multiscale information interaction (MII) unit is modeled between the two architectures via a direction-aware spatial context aggregation (DSCA) module. Besides, the contents generated by the models with different scale factors are combined to build a learnable feedback compensation correction (LFCC). The difference is fed back to the architecture with the larger scale factor, forming an interactive feedback joint optimization pattern. This calibrates the representation of spatial and spectral contents in the reconstruction process. Experiments on synthetic and real datasets demonstrate that our MulSR shows superior performance in both qualitative and quantitative aspects. Our code is publicly available at https://github.com/qianngli/MulSR.
AB - Hyperspectral image super-resolution (SR) using an auxiliary RGB image has achieved great success. Currently, most methods train a separate model for each scale factor, which may lead to inconsistency of spatial and spectral contents when the outputs are converted to the same size. In fact, this manner ignores the potential interdependence among different scale factors within a single model. To this end, we propose multiscale factor joint learning for hyperspectral image SR (MulSR). Specifically, to take advantage of the inherent priors of spatial and spectral information, a deep architecture using a single scale factor is designed with a symmetrical guided encoder (SGE) to explore the hyperspectral and RGB images. Considering the obvious differences in texture details at various scale factors, another architecture is proposed that is basically the same as the above, except that its scale factor is larger. On this basis, a multiscale information interaction (MII) unit is modeled between the two architectures via a direction-aware spatial context aggregation (DSCA) module. Besides, the contents generated by the models with different scale factors are combined to build a learnable feedback compensation correction (LFCC). The difference is fed back to the architecture with the larger scale factor, forming an interactive feedback joint optimization pattern. This calibrates the representation of spatial and spectral contents in the reconstruction process. Experiments on synthetic and real datasets demonstrate that our MulSR shows superior performance in both qualitative and quantitative aspects. Our code is publicly available at https://github.com/qianngli/MulSR.
KW - Compensation correction
KW - contextual aggregation
KW - hyperspectral image
KW - joint optimization
KW - super-resolution (SR)
UR - http://www.scopus.com/inward/record.url?scp=85171551508&partnerID=8YFLogxK
U2 - 10.1109/TGRS.2023.3312436
DO - 10.1109/TGRS.2023.3312436
M3 - Article
AN - SCOPUS:85171551508
SN - 0196-2892
VL - 61
JO - IEEE Transactions on Geoscience and Remote Sensing
JF - IEEE Transactions on Geoscience and Remote Sensing
M1 - 5523110
ER -