TY - CONF
T1 - An Optimized Global Disturbance Adversarial Attack Method for Infrared Object Detection
AU - Dai, Jiaxin
AU - Jiang, Wen
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Deep neural networks (DNNs) are applied across many fields: they enable real-time object detection in transportation, medicine, and industry, and are closely tied to daily life. During the epidemic, infrared object detection has also been widely used, yet the vulnerability and security of neural networks demand attention. To probe the security of object detection networks and emphasize the urgent need for robust systems, this paper proposes an attack method against Faster-RCNN object detection on infrared images. By controlling the gradient update direction and optimizing the loss function, an adversarial disturbance is customized for each input image so that the object detection network (Faster-RCNN) mislabels the target or detects a spurious new target, thereby deceiving the network and invalidating the detection task. We show how to generate adversarial infrared images by adding a tiny, visually imperceptible disturbance that the network cannot detect yet that significantly degrades its output. Experiments on the FLIR dataset compare and verify our method.
AB - Deep neural networks (DNNs) are applied across many fields: they enable real-time object detection in transportation, medicine, and industry, and are closely tied to daily life. During the epidemic, infrared object detection has also been widely used, yet the vulnerability and security of neural networks demand attention. To probe the security of object detection networks and emphasize the urgent need for robust systems, this paper proposes an attack method against Faster-RCNN object detection on infrared images. By controlling the gradient update direction and optimizing the loss function, an adversarial disturbance is customized for each input image so that the object detection network (Faster-RCNN) mislabels the target or detects a spurious new target, thereby deceiving the network and invalidating the detection task. We show how to generate adversarial infrared images by adding a tiny, visually imperceptible disturbance that the network cannot detect yet that significantly degrades its output. Experiments on the FLIR dataset compare and verify our method.
KW - adversarial attack
KW - global disturbance
KW - Gradient fine-tuning control
KW - Infrared image
KW - loss optimization
KW - object detection
UR - http://www.scopus.com/inward/record.url?scp=85180128033&partnerID=8YFLogxK
U2 - 10.1109/ICUS58632.2023.10318316
DO - 10.1109/ICUS58632.2023.10318316
M3 - Conference contribution
AN - SCOPUS:85180128033
T3 - Proceedings of 2023 IEEE International Conference on Unmanned Systems, ICUS 2023
SP - 841
EP - 846
BT - Proceedings of 2023 IEEE International Conference on Unmanned Systems, ICUS 2023
A2 - Song, Rong
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 IEEE International Conference on Unmanned Systems, ICUS 2023
Y2 - 13 October 2023 through 15 October 2023
ER -