Optical Remote Sensing Image Classification Method Based on Scene Context Perception

Xinyi Guo, Ke Zhang, Zhengyu Guo, Yu Su

Research output: Contribution to journal › Article › peer-review

Abstract

Optical remote sensing image classification is one of the key technologies in the field of Earth observation. In recent years, researchers have applied deep neural networks to optical remote sensing image classification. To address the inadequate feature extraction of some network models, this paper proposes a remote sensing image classification method based on scene context perception and attention enhancement, called ScEfficientNet. The method designs a scene context-driven module (SCDM) that models the spatial relationship between a target and its surrounding neighborhood, enhancing the original feature representation with scene context features. It also introduces a convolutional block attention module (CBAM) to weight the feature maps according to the importance of channels and spatial locations, and combines it with a depth-wise separable convolution structure, referred to as ScMBConv, to extract discriminative information about the targets. Building on these components, the ScEfficientNet model, which incorporates scene context perception and attention enhancement, is applied to remote sensing image classification. Experimental results show that ScEfficientNet achieves an accuracy of 96.8% on the AID dataset, a 3.3% improvement over the original network, with a parameter count of 5.55 M. Its overall performance is superior to that of other image classification algorithms such as VGGNet19, GoogLeNet, and ViT-B, confirming the effectiveness of the ScEfficientNet model.
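The abstract describes CBAM as weighting feature maps by channel importance and then by spatial-location importance. The paper's actual ScMBConv uses learned layers (a shared MLP for the channel branch and a convolution for the spatial branch); the following is only a minimal NumPy sketch of that two-stage gating idea, with the learned weights replaced by simple identity-weight sums (a hypothetical simplification, not the authors' implementation).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x):
    # x: feature map of shape (C, H, W).
    # Average- and max-pool over spatial dims to get per-channel descriptors.
    avg = x.mean(axis=(1, 2))   # (C,)
    mx = x.max(axis=(1, 2))     # (C,)
    # CBAM feeds both descriptors through a shared MLP; a plain sum
    # stands in for it here (hypothetical simplification).
    gate = sigmoid(avg + mx)    # per-channel weights in (0, 1)
    return x * gate[:, None, None]

def spatial_attention(x):
    # Pool across the channel axis to get two (H, W) maps.
    avg = x.mean(axis=0)
    mx = x.max(axis=0)
    # CBAM applies a 7x7 convolution to the stacked maps; a plain sum
    # stands in for it here (hypothetical simplification).
    gate = sigmoid(avg + mx)    # per-location weights in (0, 1)
    return x * gate[None, :, :]

def cbam(x):
    # CBAM ordering: channel attention first, then spatial attention.
    return spatial_attention(channel_attention(x))
```

Because both gates are sigmoids in (0, 1), the output keeps the input's shape while attenuating less informative channels and locations, which is the weighting behavior the abstract attributes to the CBAM stage of ScMBConv.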

Translated title of the contribution: Optical Remote Sensing Image Classification Method Based on Scene Context Perception
Original language: Traditional Chinese
Pages (from-to): 94-100
Number of pages: 7
Journal: Aero Weaponry
Volume: 31
Issue: 3
DOI
Publication status: Published - 30 Jun 2024

Keywords

  • convolutional neural network
  • EfficientNet
  • image classification
  • optical remote sensing image
