Task-specific contrastive learning for few-shot remote sensing image scene classification

Qingjie Zeng, Jie Geng

Research output: Contribution to journal › Article › peer-review

70 Citations (Scopus)

Abstract

Deep neural networks have been successfully applied to remote sensing image scene classification, but they require a large amount of annotated data for training. However, obtaining abundant labeled samples is time-consuming and labor-intensive in many applications. It is therefore of great importance to conduct scene classification with only a few annotated images. To address this issue, we propose a task-specific contrastive learning (TSC) model for few-shot scene classification of remote sensing images, which aims to enhance scene classification performance with fewer labeled samples. Specifically, a self-attention and mutual-attention module (SMAM) is developed to learn feature correlations and reduce background interference. Moreover, a task-specific contrastive loss function is proposed to optimize the deep network, which generates task-specific paired data based on different views of the original images. This strategy helps enhance the discrimination between intra-class and inter-class features. Experimental results on the NWPU-RESISC45, WHU-RS19 and UCM datasets demonstrate that the proposed method produces superior accuracies compared with other related few-shot learning methods.
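The abstract describes a contrastive loss that pulls together embeddings of different views of the same image and pushes apart embeddings of different images. As an illustrative sketch only (not the authors' task-specific formulation, whose details are in the paper), a standard two-view contrastive loss of the NT-Xent family can be written in NumPy as follows; the function name and shapes are assumptions for illustration:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Illustrative NT-Xent-style contrastive loss.

    z1, z2: (N, D) embeddings of two augmented views of the same N images;
    row i of z1 and row i of z2 form a positive pair, and all other rows
    in the combined batch act as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)                # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # L2-normalize rows
    sim = z @ z.T / temperature                         # scaled cosine similarities
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                      # exclude self-similarity
    # the positive for row i is row i+n, and vice versa
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # row-wise cross-entropy with the positive as the target
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Correctly paired views yield a lower loss than mismatched pairs, which is the property the paper exploits to sharpen intra-class versus inter-class discrimination.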

Original language: English
Pages (from-to): 143-154
Number of pages: 12
Journal: ISPRS Journal of Photogrammetry and Remote Sensing
Volume: 191
DOI
Publication status: Published - Sep 2022
