SPNet: Siamese-Prototype Network for Few-Shot Remote Sensing Image Scene Classification

Gong Cheng, Liming Cai, Chunbo Lang, Xiwen Yao, Jinyong Chen, Lei Guo, Junwei Han

Research output: Contribution to journal › Article › peer-review

138 Citations (Scopus)

Abstract

Few-shot image classification, which aims to recognize unseen classes given only a few labeled samples, has attracted extensive attention. Due to the large intraclass variance and interclass similarity of remote sensing scenes, the task under such circumstances is much more challenging than general few-shot image classification. Most existing prototype-based few-shot algorithms calculate prototypes directly from support samples and ignore the validity of the prototypes, which degrades the accuracy of the subsequent prototype-based inference. To tackle this problem, we propose a Siamese-prototype network (SPNet) with prototype self-calibration (SC) and intercalibration (IC). First, to acquire more accurate prototypes, we utilize the supervision information from the support labels to calibrate the prototypes generated from the support features. This process is called SC. Second, we propose to treat the confidence scores of the query samples as another type of prototype, which is then used to predict the support samples in the same way. The information interaction between support and query samples thus implicitly serves as a further calibration of the prototypes (the so-called IC). Our model is optimized with three losses, of which the two additional losses help the model learn more representative prototypes and make more accurate predictions. With no additional parameters to be learned, our model is lightweight and convenient to employ. Experiments on three public remote sensing image datasets demonstrate competitive performance compared with other advanced few-shot image classification approaches. The source code is available at https://github.com/zoraup/SPNet.
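To make the episodic training scheme described above more concrete, the following is a minimal PyTorch-style sketch of one possible reading of the abstract, not the authors' implementation: prototypes are class means of support embeddings from a shared (Siamese) backbone, the SC loss classifies the support samples against those prototypes using the support labels, and the IC loss builds query-side prototypes from the query confidence scores and uses them to predict the support set. All function names, loss weights, and the exact calibration formulas are assumptions; refer to https://github.com/zoraup/SPNet for the official code.

```python
# Hedged sketch of the three-loss episode described in the abstract (assumed details).
import torch
import torch.nn.functional as F

def prototypes_from(features, labels, n_way):
    # Class-wise mean of embedded support samples (standard prototype computation).
    return torch.stack([features[labels == c].mean(dim=0) for c in range(n_way)])

def logits_against(features, prototypes):
    # Negative squared Euclidean distance as class scores.
    return -torch.cdist(features, prototypes).pow(2)

def spnet_episode_loss(support_f, support_y, query_f, query_y, n_way,
                       lambda_sc=1.0, lambda_ic=1.0):
    """support_f / query_f: embeddings from a shared (Siamese) backbone."""
    protos = prototypes_from(support_f, support_y, n_way)

    # Main loss: classify query samples against the support prototypes.
    loss_main = F.cross_entropy(logits_against(query_f, protos), query_y)

    # Self-calibration (SC): support labels also supervise how well the
    # prototypes explain the support samples themselves.
    loss_sc = F.cross_entropy(logits_against(support_f, protos), support_y)

    # Inter-calibration (IC), one plausible reading: query confidence scores act
    # as soft assignments that form query-side prototypes, which must in turn
    # predict the support samples.
    q_conf = F.softmax(logits_against(query_f, protos), dim=1)            # (Nq, n_way)
    q_protos = q_conf.t() @ query_f / (q_conf.sum(dim=0, keepdim=True).t() + 1e-8)
    loss_ic = F.cross_entropy(logits_against(support_f, q_protos), support_y)

    return loss_main + lambda_sc * loss_sc + lambda_ic * loss_ic
```

Because the two calibration terms reuse the same distance-based classifier, the sketch adds no learnable parameters beyond the backbone, which is consistent with the lightweight design claimed in the abstract.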
