Abstract
Backprojection networks have achieved promising super-resolution performance for natural images but have not been well explored in the remote sensing image super-resolution (RSISR) field due to their high computational cost. In this article, we propose a scale-aware backprojection Transformer, termed SPT, for RSISR. SPT incorporates the backprojection learning strategy into a Transformer framework. It consists of scale-aware backprojection-based self-attention layers (SPALs) for scale-aware low-resolution feature learning and scale-aware backprojection-based Transformer blocks (SPTBs) for hierarchical feature learning. A backprojection-based reconstruction module (PRM) is also introduced to enhance the hierarchical features for image reconstruction. SPT stands out by efficiently learning low-resolution features without requiring additional modules for high-resolution processing, resulting in lower computational cost. Experimental results on the UCMerced and AID datasets demonstrate that SPT achieves state-of-the-art results compared with other leading RSISR methods.
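The abstract describes SPT as a composition of SPAL self-attention layers, SPTB Transformer blocks, and a PRM reconstruction head that keeps all feature learning at low resolution and upsamples only at the end. Below is a minimal, hypothetical PyTorch sketch of that composition only; the internal designs of SPAL, SPTB, and PRM, the backprojection formulation, and all class names and hyperparameters (`SPTSketch`, `dim`, `depth`, `scale`, etc.) are assumptions for illustration, not the authors' actual architecture.

```python
# Hypothetical sketch of the SPAL -> SPTB -> PRM composition described in the abstract.
# Layer internals are placeholders; only the overall data flow (LR-resolution feature
# learning followed by a single upsampling reconstruction step) follows the abstract.
import torch
import torch.nn as nn


class SPAL(nn.Module):
    """Placeholder for a scale-aware backprojection-based self-attention layer."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, N, C)
        out, _ = self.attn(x, x, x)
        return self.norm(x + out)  # residual stand-in for the backprojection update


class SPTB(nn.Module):
    """Placeholder Transformer block: a stack of SPALs plus a feed-forward layer."""
    def __init__(self, dim: int, depth: int = 2):
        super().__init__()
        self.layers = nn.ModuleList([SPAL(dim) for _ in range(depth)])
        self.mlp = nn.Sequential(nn.Linear(dim, dim * 2), nn.GELU(), nn.Linear(dim * 2, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            x = layer(x)
        return x + self.mlp(x)


class PRM(nn.Module):
    """Placeholder reconstruction head: project features and upsample via PixelShuffle."""
    def __init__(self, dim: int, scale: int = 4):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(dim, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:  # feat: (B, C, H, W)
        return self.head(feat)


class SPTSketch(nn.Module):
    """Shallow feature extraction, SPTBs on LR-resolution tokens, then PRM reconstruction."""
    def __init__(self, dim: int = 64, num_blocks: int = 4, scale: int = 4):
        super().__init__()
        self.shallow = nn.Conv2d(3, dim, kernel_size=3, padding=1)
        self.blocks = nn.ModuleList([SPTB(dim) for _ in range(num_blocks)])
        self.reconstruct = PRM(dim, scale)

    def forward(self, lr: torch.Tensor) -> torch.Tensor:
        feat = self.shallow(lr)                   # (B, C, H, W) at LR resolution
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)  # (B, H*W, C) token sequence
        for block in self.blocks:
            tokens = block(tokens)
        feat = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.reconstruct(feat)             # upsample only at the end


if __name__ == "__main__":
    sr = SPTSketch()(torch.randn(1, 3, 48, 48))
    print(sr.shape)  # torch.Size([1, 3, 192, 192])
```

The point of the sketch is the efficiency argument from the abstract: every Transformer block operates on low-resolution tokens, and high-resolution tensors appear only once, inside the final reconstruction module.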
Original language | English |
---|---|
Article number | 5649013 |
Journal | IEEE Transactions on Geoscience and Remote Sensing |
Volume | 62 |
DOI | |
Publication status | Published - 2024 |