TY - JOUR
T1 - CSFAdv
T2 - Critical Semantic Fusion Guided Least-Effort Adversarial Example Attacks
AU - Peng, Da Tian
AU - Dong, Jianmin
AU - Zhang, Mingjiang
AU - Yang, Jungang
AU - Wang, Zhen
N1 - Publisher Copyright:
© 2005-2012 IEEE.
PY - 2024
Y1 - 2024
N2 - Extensive studies have revealed that prevalent deep neural networks (DNNs) are vulnerable to adversarial examples in image recognition tasks. However, previous adversarial example attacks always work in either the global semantic space or on local semantic attributes, so that these attacks may violate a sophisticated attacker's least-effort intentions, while adversarial perturbations caused by explicit semantic variations are likely to be perceived by human vision. In this paper, we propose a two-phase optimization modeling framework to devise a novel Critical Semantic Fusion guided least-effort Adversarial example attack (CSFAdv). Specifically, the first phase fuses coarse- and fine-grained semantic maps to localize the latent critical semantic attention region (CSAR) in the genuine image. Under the guidance of CSAR feasibility, the second phase absorbs ReLU penalization, $\mathcal{L}_0$ regularization and an $\mathcal{L}_\infty$ limitation to formulate a Top-1 & Top-2 misclassification optimization problem, which characterizes the holistic least-effort tampering behavior: localizing the most critical semantic space, doctoring the fewest pixels, injecting perturbations of limited amplitude and launching the attack most readily. Further, to solve this NP-hard problem approximately, we adapt the gradient update by merging the momentum (past gradient), the present gradient and the Hessian (future gradient) to formalize a generalized gradient descent algorithm for generating an optimal adversarial image. Finally, we perform numerical experiments to verify the validity of our CSFAdv against seven types of DNN-based image classifiers on three public datasets: ImageNet, MNIST and CIFAR10. Empirical results across ten evaluation indices demonstrate the superiority of CSFAdv over eight kinds of state-of-the-art attacks and also offer key clues for reinforcing the robustness of DNNs.
AB - Extensive studies have revealed that prevalent deep neural networks (DNNs) are vulnerable to adversarial examples in image recognition tasks. However, previous adversarial example attacks always work in either the global semantic space or on local semantic attributes, so that these attacks may violate a sophisticated attacker's least-effort intentions, while adversarial perturbations caused by explicit semantic variations are likely to be perceived by human vision. In this paper, we propose a two-phase optimization modeling framework to devise a novel Critical Semantic Fusion guided least-effort Adversarial example attack (CSFAdv). Specifically, the first phase fuses coarse- and fine-grained semantic maps to localize the latent critical semantic attention region (CSAR) in the genuine image. Under the guidance of CSAR feasibility, the second phase absorbs ReLU penalization, $\mathcal{L}_0$ regularization and an $\mathcal{L}_\infty$ limitation to formulate a Top-1 & Top-2 misclassification optimization problem, which characterizes the holistic least-effort tampering behavior: localizing the most critical semantic space, doctoring the fewest pixels, injecting perturbations of limited amplitude and launching the attack most readily. Further, to solve this NP-hard problem approximately, we adapt the gradient update by merging the momentum (past gradient), the present gradient and the Hessian (future gradient) to formalize a generalized gradient descent algorithm for generating an optimal adversarial image. Finally, we perform numerical experiments to verify the validity of our CSFAdv against seven types of DNN-based image classifiers on three public datasets: ImageNet, MNIST and CIFAR10. Empirical results across ten evaluation indices demonstrate the superiority of CSFAdv over eight kinds of state-of-the-art attacks and also offer key clues for reinforcing the robustness of DNNs.
KW - Adversarial example attacks
KW - deep neural networks
KW - image recognition
KW - semantic localization
KW - vulnerability
UR - http://www.scopus.com/inward/record.url?scp=85193483345&partnerID=8YFLogxK
U2 - 10.1109/TIFS.2024.3402385
DO - 10.1109/TIFS.2024.3402385
M3 - Article
AN - SCOPUS:85193483345
SN - 1556-6013
VL - 19
SP - 5940
EP - 5955
JO - IEEE Transactions on Information Forensics and Security
JF - IEEE Transactions on Information Forensics and Security
ER -
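
Note: the abstract's most concrete algorithmic claim is the generalized gradient descent update that merges the momentum (past gradient), the present gradient and a Hessian term (future gradient), restricted to the CSAR and to an L-infinity-bounded perturbation. The following is a minimal, hypothetical Python sketch of such an update under stated assumptions; the function names, coefficients (beta, gamma), the finite-difference Hessian-vector product and the toy quadratic loss are illustrative choices, not the authors' published algorithm.

    import numpy as np

    def hvp(grad_fn, x, v, h=1e-4):
        # Finite-difference Hessian-vector product: H(x) @ v ~ (g(x + h v) - g(x)) / h.
        # An assumed approximation; the paper does not specify how the Hessian is used.
        return (grad_fn(x + h * v) - grad_fn(x)) / h

    def generalized_gd(grad_fn, x0, mask, steps=100, lr=0.01,
                       beta=0.9, gamma=0.1, eps=8 / 255):
        # Descend an adversarial loss by merging past, present and "future" gradients,
        # perturbing only pixels inside a binary CSAR-style mask (L0-style sparsity)
        # and clipping the perturbation to an L-infinity ball of radius eps.
        x = x0.copy()
        delta = np.zeros_like(x0)   # adversarial perturbation
        m = np.zeros_like(x0)       # momentum buffer (past gradient)
        for _ in range(steps):
            g = grad_fn(x + delta)                              # present gradient
            m = beta * m + (1 - beta) * g                       # past gradient (momentum)
            step = m + g + gamma * hvp(grad_fn, x + delta, g)   # merge past/present/future
            delta -= lr * step * mask                           # doctor only CSAR pixels
            delta = np.clip(delta, -eps, eps)                   # L-infinity amplitude limit
        return x + delta

    # Toy usage with a quadratic surrogate loss f(x) = 0.5 * ||x - t||^2,
    # standing in for the Top-1 & Top-2 misclassification objective.
    t = np.ones(4)
    grad = lambda x: x - t
    x_adv = generalized_gd(grad, np.zeros(4), mask=np.array([1.0, 1.0, 0.0, 0.0]))

In a real attack the surrogate gradient would come from backpropagation through the target classifier, and the mask from the first phase's fused coarse- and fine-grained semantic maps.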