SGMA: a novel adversarial attack approach with improved transferability

Peican Zhu, Jinbang Hong, Xingyu Li, Keke Tang, Zhen Wang

Research output: Contribution to journal › Article › peer-review

14 Scopus citations

Abstract

Deep learning models are easily deceived by adversarial examples, and transferable attacks are crucial because the target model's information is typically inaccessible. Existing state-of-the-art (SOTA) attack approaches tend to destroy important features of objects to generate adversarial examples. This paper proposes the split grid mask attack (SGMA), which reduces the intensity of model-specific features via a split grid mask transformation, effectively highlighting the important features of the input image. Perturbing these important features guides the generation of adversarial examples in a more transferable direction. Specifically, we apply the split grid mask transformation to the input image. Because model-specific features are vulnerable to image transformations, their intensity decreases after aggregation, while the intensities of the important features are preserved. Adversarial examples generated under the guidance of destroying these important features exhibit excellent transferability. Extensive experimental results demonstrate the effectiveness of the proposed SGMA. Compared with SOTA attack approaches, our method improves black-box attack success rates by an average of 6.4% against normally trained models and 8.2% against defense models.
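The abstract describes the pipeline only at a high level; the following is a minimal PyTorch sketch of that idea, assuming a GridMask-style random cell mask, gradient aggregation over several masked copies to estimate feature importance, and a feature-suppression objective in the style of feature-level transfer attacks such as FIA. The function names (split_grid_mask, aggregate_feature_weights, sgma_like_attack), the grid size, the number of copies, the feature layer, and all hyperparameters are illustrative assumptions, not the authors' exact algorithm.

```python
# Hypothetical sketch of a split-grid-mask, feature-level transfer attack in PyTorch.
# Grid geometry, number of masked copies, aggregation rule, and loss are assumptions
# for illustration; the paper's exact SGMA procedure may differ.
import torch
import torch.nn.functional as F


def split_grid_mask(x, grid=4, keep_prob=0.5):
    """Split the image into a grid x grid layout and randomly zero out cells."""
    b, _, h, w = x.shape
    cells = (torch.rand(b, 1, grid, grid, device=x.device) < keep_prob).float()
    mask = F.interpolate(cells, size=(h, w), mode="nearest")
    return x * mask


def aggregate_feature_weights(model, feat_layer, x, y, n_copies=16):
    """Average gradients of an intermediate feature map over masked copies.
    Model-specific responses are disrupted by the random masking, so the
    aggregated gradient emphasizes features shared across copies."""
    feats = {}
    handle = feat_layer.register_forward_hook(lambda m, i, o: feats.update(out=o))
    weights = None
    for _ in range(n_copies):
        xm = split_grid_mask(x).detach().requires_grad_(True)
        loss = F.cross_entropy(model(xm), y)
        grad = torch.autograd.grad(loss, feats["out"])[0]
        weights = grad if weights is None else weights + grad
    handle.remove()
    return (weights / n_copies).detach()


def sgma_like_attack(model, feat_layer, x, y, eps=16 / 255, steps=10):
    """Iteratively perturb x so that highly weighted (important) features are
    suppressed; such feature-level perturbations tend to transfer across models."""
    alpha = eps / steps
    w = aggregate_feature_weights(model, feat_layer, x, y)
    feats = {}
    handle = feat_layer.register_forward_hook(lambda m, i, o: feats.update(out=o))
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        model(x_adv)
        # Minimizing the weighted feature response destroys important features.
        loss = (w * feats["out"]).sum()
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv - alpha * grad.sign()).detach()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    handle.remove()
    return x_adv
```

For a torchvision ResNet-50, feat_layer could be, for example, model.layer2 or model.layer3; which layer best exposes the "important" features is itself a design choice that the abstract alone does not specify.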

Original language: English
Pages (from-to): 6051-6063
Number of pages: 13
Journal: Complex and Intelligent Systems
Volume: 9
Issue number: 5
DOIs
State: Published - Oct 2023

Keywords

  • Adversarial examples
  • Deep neural networks
  • Feature-level attack
  • Transferable attack
