Scene classification with recurrent attention of VHR remote sensing images

Qi Wang, Shaoteng Liu, Jocelyn Chanussot, Xuelong Li

Research output: Contribution to journal › Article › peer-review

558 Citations (Scopus)

Abstract

Scene classification of remote sensing images has drawn great attention because of its wide range of applications. In this paper, guided by the human visual system (HVS), we explore the attention mechanism and propose a novel end-to-end attention recurrent convolutional network (ARCNet) for scene classification. It learns to focus selectively on key regions or locations and to process only their high-level features, thereby discarding noncritical information and improving classification performance. The contributions of this paper are threefold. First, we design a novel recurrent attention structure that squeezes high-level semantic and spatial features into several simplex vectors to reduce the number of learning parameters. Second, an end-to-end network named ARCNet is proposed to adaptively select a series of attention regions and to generate powerful predictions by learning to process them sequentially. Third, we construct a new data set named OPTIMAL-31, which contains more categories than popular data sets and gives researchers an additional platform for validating their algorithms. The experimental results demonstrate that our model achieves substantial improvements over state-of-the-art approaches.
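The recurrent-attention idea described in the abstract can be illustrated with a minimal sketch. The code below is not the authors' ARCNet implementation; it only shows, under assumed module names, dimensions, and a generic soft-attention form, how an LSTM state can repeatedly attend over a CNN feature map and feed the final state to a scene classifier.

    # Minimal sketch of a recurrent soft-attention classifier over CNN features.
    # NOT the authors' ARCNet; all names, sizes, and the attention form are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torchvision

    class RecurrentAttentionClassifier(nn.Module):
        def __init__(self, num_classes=31, steps=4, hidden=512):
            super().__init__()
            # Backbone CNN producing a spatial feature map (assumed: ResNet-18 trunk).
            backbone = torchvision.models.resnet18(weights=None)
            self.features = nn.Sequential(*list(backbone.children())[:-2])  # B x 512 x H x W
            self.steps = steps
            self.rnn = nn.LSTMCell(512, hidden)        # recurrent unit over attended features
            self.attn = nn.Linear(hidden + 512, 1)     # scores each spatial location
            self.classifier = nn.Linear(hidden, num_classes)

        def forward(self, images):
            fmap = self.features(images)                   # B x 512 x H x W
            locs = fmap.flatten(2).transpose(1, 2)         # B x (H*W) x 512
            h = fmap.new_zeros(fmap.size(0), self.rnn.hidden_size)
            c = fmap.new_zeros(fmap.size(0), self.rnn.hidden_size)
            for _ in range(self.steps):
                # Soft attention: weight locations by compatibility with the hidden state.
                query = h.unsqueeze(1).expand(-1, locs.size(1), -1)
                scores = self.attn(torch.cat([query, locs], dim=-1)).squeeze(-1)  # B x (H*W)
                weights = F.softmax(scores, dim=-1)
                glimpse = torch.bmm(weights.unsqueeze(1), locs).squeeze(1)        # B x 512
                h, c = self.rnn(glimpse, (h, c))
            return self.classifier(h)                      # logits per scene class

    # Usage sketch: logits = RecurrentAttentionClassifier()(torch.randn(2, 3, 224, 224))

The attended "glimpse" at each step plays the role of the selected key region, and repeating the loop for a few steps mirrors the sequential processing of attention regions described above.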

Original language: English
Article number: 8454883
Pages (from-to): 1155-1167
Number of pages: 13
Journal: IEEE Transactions on Geoscience and Remote Sensing
Volume: 57
Issue number: 2
DOI
Publication status: Published - Feb 2019
