View-Semantic Transformer with Enhancing Diversity for Sparse-View SAR Target Recognition

Zhunga Liu, Feiyan Wu, Zaidao Wen, Zuowei Zhang

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

With the rapid development of supervised learning-based synthetic aperture radar (SAR) target recognition, it has become clear that recognition performance scales with the number of training samples. However, incomplete data within categories produces biased data distributions and under-representative models, exacerbating the challenge of SAR interpretation. In this article, we propose a new view-semantic transformer network (VSTNet) that generates synthesized samples to complete the statistical distribution of the training data and improve the discriminative representation of the model. First, SAR images from different views are encoded into a disentangled latent space, which allows us to synthesize data with more diverse views by manipulating view-semantic features. Second, the synthesized data effectively expand the training set and alleviate the overfitting caused by limited data in sparse views. Third, the proposed method unifies SAR image synthesis and SAR target recognition in an end-to-end framework, so that the two tasks mutually boost each other's performance. Experiments conducted on the moving and stationary target acquisition and recognition (MSTAR) dataset demonstrate the robustness and effectiveness of the proposed method.
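To make the abstract's three-step pipeline concrete, below is a minimal, hypothetical PyTorch-style sketch, not the authors' code: VSTNet itself is transformer-based, whereas this stand-in uses a small convolutional encoder-decoder purely to illustrate the disentangle-swap-classify loop. All module names, dimensions, and the batch-roll view swap are assumptions for illustration. The sketch encodes an image into separate identity and view-semantic codes, synthesizes a novel view by swapping view codes within a batch, and trains synthesis and recognition jointly, end to end.

```python
# Hypothetical sketch (not the authors' implementation): disentangled
# identity/view encoding, view-swap synthesis, and joint training.
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    """Maps a 64x64 single-channel SAR chip to identity and view-semantic codes."""
    def __init__(self, id_dim=128, view_dim=32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_identity = nn.Linear(64, id_dim)  # target-class content
        self.to_view = nn.Linear(64, view_dim)    # aspect-angle (view) semantics

    def forward(self, x):
        h = self.backbone(x)
        return self.to_identity(h), self.to_view(h)

class Decoder(nn.Module):
    """Reconstructs an image from concatenated (identity, view) codes."""
    def __init__(self, id_dim=128, view_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(id_dim + view_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z_id, z_view):
        return self.net(torch.cat([z_id, z_view], dim=1))

encoder, decoder = DisentangledEncoder(), Decoder()
classifier = nn.Linear(128, 10)  # e.g., 10 MSTAR target classes
params = (list(encoder.parameters()) + list(decoder.parameters())
          + list(classifier.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)

def train_step(x, y):
    """One joint step: reconstruct, synthesize new views, classify both."""
    z_id, z_view = encoder(x)
    recon = decoder(z_id, z_view)
    # Swap view codes across the batch to synthesize samples with unseen views
    # while keeping each sample's identity (and hence its class label).
    synth = decoder(z_id, z_view.roll(shifts=1, dims=0))
    logits_real = classifier(z_id)
    z_id_synth, _ = encoder(synth)
    logits_synth = classifier(z_id_synth)
    loss = (nn.functional.mse_loss(recon, x)
            + nn.functional.cross_entropy(logits_real, y)
            + nn.functional.cross_entropy(logits_synth, y))  # same label, new view
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

In the actual VSTNet, the view-semantic manipulation is presumably performed by transformer blocks targeting specific unobserved aspect angles; the in-batch roll used here is only a compact way to exercise the same swap-and-resynthesize interface.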

Original language: English
Article number: 5211610
Journal: IEEE Transactions on Geoscience and Remote Sensing
Volume: 61
DOI
Publication status: Published - 2023
