Scale-Aware Backprojection Transformer for Single Remote Sensing Image Super-Resolution

  • Jinglei Hao
  • Wukai Li
  • Yuting Lu
  • Yang Jin
  • Yongqiang Zhao
  • Shunzhou Wang
  • Binglu Wang

Research output: Contribution to journal › Article › peer-review

19 Scopus citations

Abstract

Backprojection networks have achieved promising super-resolution performance on natural images, but they have not been well explored in the remote sensing image super-resolution (RSISR) field due to their high computational cost. In this article, we propose a scale-aware backprojection Transformer, termed SPT, for RSISR. SPT incorporates the backprojection learning strategy into a Transformer framework. It consists of scale-aware backprojection-based self-attention layers (SPALs) for scale-aware low-resolution feature learning and scale-aware backprojection-based Transformer blocks (SPTBs) for hierarchical feature learning. A backprojection-based reconstruction module (PRM) is also introduced to enhance the hierarchical features for image reconstruction. SPT stands out by efficiently learning low-resolution features without excessive modules for high-resolution processing, resulting in lower computational cost. Experimental results on the UCMerced and AID datasets demonstrate that SPT achieves state-of-the-art results compared with other leading RSISR methods.
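The backprojection learning strategy that SPT builds on can be illustrated with the classic iterative back-projection idea from super-resolution: downscale the current high-resolution estimate, compare it with the observed low-resolution image, and project the residual back up as a correction. The sketch below is a minimal, dependency-free illustration of that general principle only; the paper's SPALs and SPTBs apply it to learned Transformer features, not raw pixels, and the operator names here (`downscale`, `upscale`, `backproject`) are illustrative, not from the paper.

```python
def downscale(img, s):
    # Degradation operator: average-pool a 2-D image by factor s.
    h, w = len(img), len(img[0])
    return [[sum(img[y * s + i][x * s + j]
                 for i in range(s) for j in range(s)) / (s * s)
             for x in range(w // s)] for y in range(h // s)]

def upscale(img, s):
    # Back-projection operator: nearest-neighbour upsampling by factor s.
    return [[img[y // s][x // s] for x in range(len(img[0]) * s)]
            for y in range(len(img) * s)]

def backproject(lr, sr, s, step=1.0):
    # One back-projection correction: push the SR estimate toward
    # consistency with the observed LR image.
    residual = [[lr[y][x] - d for x, d in enumerate(row)]
                for y, row in enumerate(downscale(sr, s))]
    up = upscale(residual, s)
    return [[sr[y][x] + step * up[y][x] for x in range(len(sr[0]))]
            for y in range(len(sr))]
```

For this average-pool/nearest-neighbour operator pair, a single correction step restores exact consistency: downscaling the corrected estimate reproduces the low-resolution input.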

Original language: English
Article number: 5649013
Journal: IEEE Transactions on Geoscience and Remote Sensing
Volume: 62
DOIs
State: Published - 2024

Keywords

  • Backprojection
  • Transformer
  • multiscale feature learning
  • remote sensing image super-resolution (RSISR)
