Scale-Aware Backprojection Transformer for Single Remote Sensing Image Super-Resolution

Abstract
Backprojection networks have achieved promising super-resolution performance on natural images, but they remain underexplored in the remote sensing image super-resolution (RSISR) field because of their high computational cost. In this article, we propose a scale-aware backprojection Transformer, termed SPT, for RSISR. SPT incorporates the backprojection learning strategy into a Transformer framework. It consists of scale-aware backprojection-based self-attention layers (SPALs) for scale-aware low-resolution feature learning and scale-aware backprojection-based Transformer blocks (SPTBs) for hierarchical feature learning. A backprojection-based reconstruction module (PRM) is also introduced to enhance the hierarchical features for image reconstruction. SPT stands out by efficiently learning low-resolution features without devoting excessive modules to high-resolution processing, thereby requiring fewer computational resources. Experimental results on the UCMerced and AID datasets demonstrate that SPT achieves state-of-the-art performance compared with other leading RSISR methods.
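The core idea behind the backprojection learning strategy the abstract refers to can be illustrated with a minimal, non-learned sketch: an up-projection produces a high-resolution estimate, a down-projection maps it back to low resolution, and the residual against the original low-resolution input drives a correction. This is a simplified illustration only; the function name `backproject` and the use of nearest-neighbour upsampling and average pooling are stand-ins for the paper's learned SPAL/SPTB modules, not the actual method.

```python
import numpy as np

def backproject(lr: np.ndarray, scale: int = 2) -> np.ndarray:
    """Illustrative (hypothetical) backprojection refinement step.

    Nearest-neighbour upsampling stands in for a learned up-projection,
    average pooling for a learned down-projection.
    """
    h, w = lr.shape
    # Up-project: low-resolution input -> high-resolution estimate.
    hr = np.repeat(np.repeat(lr, scale, axis=0), scale, axis=1)
    # Down-project: map the estimate back to the low-resolution grid.
    back = hr.reshape(h, scale, w, scale).mean(axis=(1, 3))
    # The residual between input and its reprojection measures the error.
    residual = lr - back
    # Correct the high-resolution estimate with the upsampled residual.
    return hr + np.repeat(np.repeat(residual, scale, axis=0), scale, axis=1)
```

With these particular stand-in projections the down-projection exactly inverts the up-projection, so the residual is zero; in a trained network the two projections differ and the residual carries the correction signal.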
| Original language | English |
|---|---|
| Article number | 5649013 |
| Journal | IEEE Transactions on Geoscience and Remote Sensing |
| Volume | 62 |
| DOIs | |
| State | Published - 2024 |
Keywords
- Backprojection
- Transformer
- multiscale feature learning
- remote sensing image super-resolution (RSISR)