TY - JOUR
T1 - SGMA: a novel adversarial attack approach with improved transferability
AU - Zhu, Peican
AU - Hong, Jinbang
AU - Li, Xingyu
AU - Tang, Keke
AU - Wang, Zhen
N1 - Publisher Copyright:
© 2023, The Author(s).
PY - 2023/10
Y1 - 2023/10
AB - Deep learning models are easily deceived by adversarial examples, and transferable attacks are crucial because model information is often inaccessible. Existing SOTA attack approaches tend to destroy the important features of objects to generate adversarial examples. This paper proposes the split grid mask attack (SGMA), which reduces the intensity of model-specific features via a split grid mask transformation, effectively highlighting the important features of the input image. Perturbing these important features guides the generation of adversarial examples in a more transferable direction. Specifically, we apply the split grid mask transformation to the input image. Because model-specific features are vulnerable to image transformations, their intensity decreases after aggregation, while the intensities of important features remain. Adversarial examples generated by destroying these important features exhibit excellent transferability. Extensive experimental results demonstrate the effectiveness of the proposed SGMA. Compared with SOTA attack approaches, our method improves black-box attack success rates by an average of 6.4% against normally trained models and 8.2% against defense models.
KW - Adversarial examples
KW - Deep neural networks
KW - Feature-level attack
KW - Transferable attack
UR - http://www.scopus.com/inward/record.url?scp=85153355590&partnerID=8YFLogxK
DO - 10.1007/s40747-023-01060-0
M3 - Article
AN - SCOPUS:85153355590
SN - 2199-4536
VL - 9
SP - 6051
EP - 6063
JO - Complex and Intelligent Systems
JF - Complex and Intelligent Systems
IS - 5
ER -