Enhancing SAR-ATR Systems' Resistance to S2M Attacks via FUA: Optimizing Surrogate Models for Adversarial Example Transferability

Xiaying Jin, Shuangju Zhou, Chenyu Wang, Mingxin Fu, Quan Pan, Yang Li

Research output: Contribution to journal › Article › peer-review

Abstract

The vulnerability of deep-neural-network-based synthetic aperture radar automatic target recognition (SAR-ATR) models has garnered increasing attention in recent research. A novel and extreme prior-knowledge-limited attack scenario, synthetic-to-measured (S2M), has been proposed, in which the architecture, parameters, training data, and outputs of the target model remain entirely unknown and adversarial perturbations are generated exclusively from synthetic data. To address the challenges posed by the S2M scenario, we propose FUA (Fine-tuning, Uniform data distribution fine-tuning, and Architecture modification), a comprehensive framework that improves the transferability of adversarial examples generated by SAR-ATR surrogate models. By introducing an S2M transferability estimate between the surrogate and target models, FUA progressively optimizes the surrogate model in three respects: model parameters, data distribution, and model architecture. First, the fine-tuning phase provides suitable initial model parameters. Then, the uniform data distribution fine-tuning phase generates a uniform substitute dataset and applies a decision-boundary smoothing loss to further fine-tune the surrogate model. Finally, the architecture modification phase modifies the model's activation functions and skip connections while keeping its parameters fixed. Experimental results demonstrate that FUA outperforms state-of-the-art (SOTA) methods and significantly improves S2M transferability across various adversarial attack algorithms, and that the optimization strategy at each phase contributes to the overall performance improvement.
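Since the abstract describes FUA only at a high level, a minimal PyTorch sketch of the three phases may help fix ideas. Everything below is an assumption made for illustration: the function names (`phase_f_finetune`, `boundary_smoothing_loss`, etc.), the choice of a KL-to-uniform smoothing term, the Softplus activation substitution, and all hyperparameters are hypothetical stand-ins, not the paper's actual method.

```python
# Hypothetical sketch of the three FUA phases described above. All names,
# losses, and hyperparameters are illustrative assumptions; the abstract
# does not specify the paper's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def phase_f_finetune(surrogate, synthetic_loader, epochs=5, lr=1e-4):
    """F: fine-tune on synthetic data to obtain suitable initial parameters."""
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    surrogate.train()
    for _ in range(epochs):
        for x, y in synthetic_loader:
            opt.zero_grad()
            F.cross_entropy(surrogate(x), y).backward()
            opt.step()
    return surrogate

def boundary_smoothing_loss(logits, temperature=4.0):
    """Assumed decision-boundary smoothing term: pull softened predictions
    toward a uniform distribution so the boundary is less sharply fitted."""
    log_p = F.log_softmax(logits / temperature, dim=1)
    uniform = torch.full_like(log_p, 1.0 / logits.size(1))
    return F.kl_div(log_p, uniform, reduction="batchmean")

def phase_u_uniform_finetune(surrogate, substitute_loader,
                             epochs=5, lr=1e-5, lam=0.1):
    """U: further fine-tune on a uniform substitute dataset with the
    smoothing loss added to the classification loss."""
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    surrogate.train()
    for _ in range(epochs):
        for x, y in substitute_loader:
            logits = surrogate(x)
            loss = F.cross_entropy(logits, y) + lam * boundary_smoothing_loss(logits)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return surrogate

def phase_a_modify_architecture(surrogate):
    """A: with parameters fixed, swap activations for a smoother variant
    (Softplus here, one illustrative choice). Skip-connection modification,
    also mentioned in the abstract, is omitted for brevity."""
    for p in surrogate.parameters():
        p.requires_grad_(False)
    for name, child in surrogate.named_children():
        if isinstance(child, nn.ReLU):
            setattr(surrogate, name, nn.Softplus())
        else:
            phase_a_modify_architecture(child)  # recurse into submodules
    return surrogate
```

In an S2M pipeline along these lines, adversarial examples would then be crafted on the modified surrogate with any standard attack algorithm and transferred to the fully black-box target model.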

Keywords

  • Adversarial attack
  • Black-box attack
  • Synthetic aperture radar
  • Transfer attack
