A dual-model architecture with grouping-attention-fusion for remote sensing scene classification

Junge Shen, Tong Zhang, Yichen Wang, Ruxin Wang, Qi Wang, Min Qi

Research output: Contribution to journal › Article › peer-review

18 Scopus citations

Abstract

Remote sensing images contain complex backgrounds and multi-scale objects, which make scene classification a challenging task. The performance depends heavily on the capacity of the scene representation as well as the discriminability of the classifier. Although multiple models possess better properties than a single model in these respects, the fusion strategy for these models is a key component in maximizing the final accuracy. In this paper, we construct a novel dual-model architecture with a grouping-attention-fusion strategy to improve the performance of scene classification. Specifically, the model employs two different convolutional neural networks (CNNs) for feature extraction, where the grouping-attention-fusion strategy is used to fuse the features of the CNNs in a fine-grained and multi-scale manner. In this way, the resultant feature representation of the scene is enhanced. Moreover, to address the issue of similar appearances between different scenes, we develop a loss function that encourages small intra-class diversity and large inter-class distances. Extensive experiments are conducted on four scene classification datasets: the UCM land-use dataset, the WHU-RS19 dataset, the AID dataset, and the OPTIMAL-31 dataset. The experimental results demonstrate the superiority of the proposed method in comparison with state-of-the-art approaches.
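The abstract describes two ideas a short sketch can make concrete: fusing two backbones' features via group-wise attention, and a loss that combines cross-entropy with an intra-class compactness term. Below is a minimal PyTorch sketch of one plausible reading; the backbone choices (ResNet-18, VGG-16), the group count, the attention MLP, and the center-loss-style compactness term are illustrative assumptions, not the paper's exact specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


class GroupingAttentionFusion(nn.Module):
    """Concatenate the two backbones' channel maps, split them into groups,
    and reweight each group with a learned attention score before fusing
    (an assumed reading of the grouping-attention-fusion strategy)."""

    def __init__(self, channels_a, channels_b, groups=4):
        super().__init__()
        self.groups = groups
        fused = channels_a + channels_b
        assert fused % groups == 0
        # One attention score per group, computed from globally pooled features.
        self.attn = nn.Sequential(
            nn.Linear(fused, fused // 4),
            nn.ReLU(inplace=True),
            nn.Linear(fused // 4, groups),
            nn.Sigmoid(),
        )

    def forward(self, fa, fb):
        # fa: (B, Ca, H, W), fb: (B, Cb, H, W); spatial sizes must match.
        f = torch.cat([fa, fb], dim=1)                   # (B, C, H, W)
        b, c, h, w = f.shape
        pooled = F.adaptive_avg_pool2d(f, 1).flatten(1)  # (B, C)
        scores = self.attn(pooled)                       # (B, groups)
        f = f.view(b, self.groups, c // self.groups, h, w)
        f = f * scores.view(b, self.groups, 1, 1, 1)     # reweight each group
        return f.view(b, c, h, w)


class DualModelClassifier(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        # Two different CNN backbones, as the abstract specifies.
        self.cnn_a = nn.Sequential(
            *list(models.resnet18(weights=None).children())[:-2])  # 512 channels
        self.cnn_b = models.vgg16(weights=None).features            # 512 channels
        self.fusion = GroupingAttentionFusion(512, 512, groups=4)
        self.head = nn.Linear(1024, num_classes)

    def forward(self, x):
        fa = self.cnn_a(x)
        fb = self.cnn_b(x)
        # Align spatial sizes before fusion (backbones may downsample differently).
        fb = F.adaptive_avg_pool2d(fb, fa.shape[-2:])
        f = self.fusion(fa, fb)
        emb = F.adaptive_avg_pool2d(f, 1).flatten(1)     # (B, 1024) scene embedding
        return self.head(emb), emb


def compact_discriminative_loss(logits, emb, labels, centers, lam=0.01):
    """Cross-entropy plus a center-loss-style term: the intra term pulls each
    embedding toward its class center (small intra-class diversity), while the
    CE term keeps class logits separated (large inter-class distance)."""
    ce = F.cross_entropy(logits, labels)
    intra = ((emb - centers[labels]) ** 2).sum(dim=1).mean()
    return ce + lam * intra
```

In this sketch, `centers` would be a learnable `nn.Parameter(torch.randn(num_classes, 1024))` optimized jointly with the network, and `lam` balances compactness against classification accuracy; both the parameterization and the default weight are assumptions for illustration.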

Original language: English
Article number: 433
Pages (from-to): 1-19
Number of pages: 19
Journal: Remote Sensing
Volume: 13
Issue number: 3
DOIs
State: Published - Jan 2021

Keywords

  • Dual-model architecture
  • Grouping-attention-fusion
  • Remote sensing
  • Scene classification
