TY - JOUR
T1 - A Survey of Adversarial Attack and Adversarial Defense Techniques for Deep Models
AU - Wang, Wenxuan
AU - Wang, Chenglei
AU - Qi, Huihui
AU - Ye, Menghao
AU - Zhang, Yanning
N1 - Publisher Copyright:
© 2025 Editorial Board of Journal of Signal Processing. All rights reserved.
PY - 2025/2
Y1 - 2025/2
AB - Deep learning techniques have been widely applied in core computer vision tasks, such as image classification and object detection, achieving remarkable progress. However, owing to the complexity and inherent uncertainty of deep learning models, they are highly vulnerable to adversarial attacks. In these attacks, attackers subtly manipulate data by adding carefully designed perturbations that cause the model to make incorrect predictions with high confidence. Such adversarial examples pose significant challenges and potential threats to the reliability and security of models in real-world applications. For example, attackers can use adversarial glasses to mislead facial recognition systems, causing identity misclassification, which could lead to illegal access or identity fraud and threaten public safety and personal privacy. Similarly, adversarial noise added to the monitoring data of autonomous driving systems, while not altering the characteristics of vehicles, may cause the system to miss important vehicles, leading to traffic disruptions or even accidents with severe consequences. This paper reviews current research on adversarial attack and defense techniques. Specifically, it covers the following three aspects: 1) It introduces the basic concepts and classifications of adversarial examples, analyzes various forms and strategies of adversarial attacks, and presents classic adversarial example generation methods. 2) It describes defense methods against adversarial examples, systematically categorizing algorithms that enhance model robustness along three directions, namely model optimization, data optimization, and additional network structures, and discusses the innovation and effectiveness of each defense method. 3) It presents application cases of adversarial attacks and defenses, describes the state of adversarial attack and defense research in the era of large models, and analyzes the challenges encountered in real-world applications and possible solutions. Finally, the paper summarizes and analyzes the current state of adversarial attack and defense methods and offers insights into future research directions in this domain.
KW - adversarial attack
KW - adversarial defense
KW - computer vision
KW - deep learning
KW - trustworthy artificial intelligence
UR - http://www.scopus.com/inward/record.url?scp=85219024418&partnerID=8YFLogxK
U2 - 10.12466/xhcl.2025.02.002
DO - 10.12466/xhcl.2025.02.002
M3 - Article
AN - SCOPUS:85219024418
SN - 1003-0530
VL - 41
SP - 198
EP - 223
JO - Journal of Signal Processing
JF - Journal of Signal Processing
IS - 2
ER -