SGMA: a novel adversarial attack approach with improved transferability

Peican Zhu, Jinbang Hong, Xingyu Li, Keke Tang, Zhen Wang

Research output: Contribution to journal › Article › peer-review

14 Citations (Scopus)

Abstract

Deep learning models are easily deceived by adversarial examples, and transferable attacks are crucial when model information is inaccessible. Existing SOTA attack approaches tend to destroy important features of objects to generate adversarial examples. This paper proposes the split grid mask attack (SGMA), which reduces the intensity of model-specific features through a split grid mask transformation, effectively highlighting the important features of the input image. Perturbing these important features guides the generation of adversarial examples in a more transferable direction. Specifically, we apply the split grid mask transformation to the input image. Because model-specific features are vulnerable to image transformations, their intensity decreases after aggregation, while the intensities of the important features remain. Adversarial examples generated under the guidance of destroying these important features thus transfer well. Extensive experimental results demonstrate the effectiveness of the proposed SGMA. Compared with the SOTA attack approaches, our method improves black-box attack success rates by an average of 6.4% against normally trained models and 8.2% against defense models.
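The abstract describes the mechanism only at a high level. As a rough illustration of the idea, and not the authors' implementation, the following PyTorch sketch averages gradients over several randomly grid-masked copies of the input before each update of a standard MI-FGSM-style momentum loop; the function names (`split_grid_mask`, `sgma_sketch`), the masking scheme, and all hyperparameters (grid size, keep probability, eps, steps, copies, momentum) are assumptions, since the paper's exact split grid mask transformation is not specified here.

```python
import torch
import torch.nn.functional as F


def split_grid_mask(x, grid=4, keep_prob=0.5):
    """Hypothetical grid-mask transform: lay a grid x grid lattice over the
    image and randomly zero out some cells. The paper's actual split grid
    mask scheme may differ."""
    n, _, h, w = x.shape
    cells = (torch.rand(n, 1, grid, grid, device=x.device) < keep_prob).float()
    mask = F.interpolate(cells, size=(h, w), mode="nearest")
    return x * mask


def sgma_sketch(model, x, y, eps=16 / 255, steps=10, copies=5, mu=1.0):
    """MI-FGSM-style loop whose gradient is averaged over several randomly
    grid-masked copies of the input: gradients tied to model-specific
    features tend to cancel across copies, while gradients on important
    (shared) features survive the aggregation and drive the update."""
    alpha = eps / steps
    loss_fn = torch.nn.CrossEntropyLoss()
    x_adv, g = x.clone().detach(), torch.zeros_like(x)
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        grad = torch.zeros_like(x)
        for _ in range(copies):  # aggregate gradients over masked copies
            loss = loss_fn(model(split_grid_mask(x_adv)), y)
            grad = grad + torch.autograd.grad(loss, x_adv)[0]
        grad = grad / copies
        # momentum accumulation with L1-normalised gradient (as in MI-FGSM)
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
        x_adv = x_adv.clamp(0, 1)                 # keep a valid image
    return x_adv.detach()
```

In this reading, averaging over independently masked copies is what suppresses model-specific gradient components: they vary with the mask and cancel, while gradients on robust, important features are consistent across copies and accumulate.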

Original language: English
Pages (from-to): 6051-6063
Number of pages: 13
Journal: Complex and Intelligent Systems
Volume: 9
Issue number: 5
DOI
Publication status: Published - Oct 2023
