TY - GEN
T1 - DBA
T2 - Knowledge Science, Engineering and Management - 16th International Conference, KSEM 2023, Proceedings
AU - Fan, Zepeng
AU - Zhu, Peican
AU - Gao, Chao
AU - Hong, Jinbang
AU - Tang, Keke
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2023
Y1 - 2023
N2 - In practice, deep learning models are easily fooled by input images with subtle perturbations; such images are called adversarial examples. Adversarial examples crafted on one model can successfully fool other models with different architectures trained for the same task, a property referred to as adversarial transferability. Since information about the target model is often unavailable in practice, transfer-based adversarial attacks have developed rapidly, and various techniques have been proposed to promote adversarial transferability. Unlike existing input transformation attacks based on spatial transformation, our approach is a novel one based on information deletion. By deleting squares of the input images channel by channel, we mitigate the overfitting of adversarial examples to the surrogate model and thereby enhance adversarial transferability. Extensive evaluations on ImageNet demonstrate that our method outperforms existing input transformation attacks on a range of models, covering both unsecured and defended ones.
AB - In practice, deep learning models are easily fooled by input images with subtle perturbations; such images are called adversarial examples. Adversarial examples crafted on one model can successfully fool other models with different architectures trained for the same task, a property referred to as adversarial transferability. Since information about the target model is often unavailable in practice, transfer-based adversarial attacks have developed rapidly, and various techniques have been proposed to promote adversarial transferability. Unlike existing input transformation attacks based on spatial transformation, our approach is a novel one based on information deletion. By deleting squares of the input images channel by channel, we mitigate the overfitting of adversarial examples to the surrogate model and thereby enhance adversarial transferability. Extensive evaluations on ImageNet demonstrate that our method outperforms existing input transformation attacks on a range of models, covering both unsecured and defended ones.
KW - Adversarial examples
KW - Information deletion
KW - Input transformation
KW - Transfer-based adversarial attacks
KW - Transferability
UR - http://www.scopus.com/inward/record.url?scp=85172135808&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-40286-9_23
DO - 10.1007/978-3-031-40286-9_23
M3 - Conference contribution
AN - SCOPUS:85172135808
SN - 9783031402852
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 276
EP - 288
BT - Knowledge Science, Engineering and Management - 16th International Conference, KSEM 2023, Proceedings
A2 - Jin, Zhi
A2 - Jiang, Yuncheng
A2 - Ma, Wenjun
A2 - Buchmann, Robert Andrei
A2 - Ghiran, Ana-Maria
A2 - Bi, Yaxin
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 16 August 2023 through 18 August 2023
ER -