TY - JOUR
T1 - Vulnerable point detection and repair against adversarial attacks for convolutional neural networks
AU - Gao, Jie
AU - Xia, Zhaoqiang
AU - Dai, Jing
AU - Dang, Chen
AU - Jiang, Xiaoyue
AU - Feng, Xiaoyi
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
PY - 2023/12
Y1 - 2023/12
N2 - Recently, convolutional neural networks have been shown to be sensitive to artificially designed perturbations that are imperceptible to the naked eye. Whether the task is image classification, semantic segmentation, or object detection, all of them are affected by this problem. The existence of adversarial examples raises questions about the security of smart applications. Some works have paid attention to this problem and proposed several defensive strategies to resist adversarial attacks. However, none of them has explored the vulnerable areas of a model under multiple attacks. In this work, we fill this gap by exploring the areas of the model that are vulnerable to adversarial attacks. Specifically, under various attack methods with different strengths, we conduct extensive experiments on two datasets based on three different networks and illustrate several phenomena. In addition, by exploiting a Siamese network, we propose a novel approach to discover the deficiencies of the model more intuitively. Moreover, we provide a novel adaptive vulnerable point repair method to improve the adversarial robustness of the model. Extensive experimental results show that our proposed method can effectively improve the adversarial robustness of the model.
AB - Recently, convolutional neural networks have been shown to be sensitive to artificially designed perturbations that are imperceptible to the naked eye. Whether the task is image classification, semantic segmentation, or object detection, all of them are affected by this problem. The existence of adversarial examples raises questions about the security of smart applications. Some works have paid attention to this problem and proposed several defensive strategies to resist adversarial attacks. However, none of them has explored the vulnerable areas of a model under multiple attacks. In this work, we fill this gap by exploring the areas of the model that are vulnerable to adversarial attacks. Specifically, under various attack methods with different strengths, we conduct extensive experiments on two datasets based on three different networks and illustrate several phenomena. In addition, by exploiting a Siamese network, we propose a novel approach to discover the deficiencies of the model more intuitively. Moreover, we provide a novel adaptive vulnerable point repair method to improve the adversarial robustness of the model. Extensive experimental results show that our proposed method can effectively improve the adversarial robustness of the model.
KW - Adaptive model repair
KW - Adversarial attacks
KW - Adversarial robustness
KW - Vulnerable point detection
UR - http://www.scopus.com/inward/record.url?scp=85163603165&partnerID=8YFLogxK
U2 - 10.1007/s13042-023-01888-5
DO - 10.1007/s13042-023-01888-5
M3 - Article
AN - SCOPUS:85163603165
SN - 1868-8071
VL - 14
SP - 4163
EP - 4192
JO - International Journal of Machine Learning and Cybernetics
JF - International Journal of Machine Learning and Cybernetics
IS - 12
ER -