PGN: A Perturbation Generation Network Against Deep Reinforcement Learning

Xiangjuan Li, Feifan Li, Yang Li, Quan Pan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Citations (Scopus)

Abstract

Deep reinforcement learning has advanced greatly and has been applied in many areas. In this paper, we explore the vulnerability of deep reinforcement learning by proposing a novel generative model that creates effective adversarial examples to attack the agent. The proposed model supports both targeted and untargeted attacks. Considering the specific characteristics of deep reinforcement learning, we propose the action consistency ratio as a measure of stealthiness, together with a new index that measures both effectiveness and stealthiness. Experimental results show that our method ensures the effectiveness and stealthiness of the attack compared with other algorithms. Moreover, our method is considerably faster and thus enables rapid and efficient verification of the vulnerability of deep reinforcement learning.
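The abstract does not formally define the action consistency ratio; the sketch below is one plausible reading, assuming it denotes the fraction of timesteps on which the agent selects the same action for the perturbed observation as for the clean one. The policy, array shapes, and function names are illustrative assumptions, not the paper's implementation.

import numpy as np

def action_consistency_ratio(policy, clean_obs, perturbed_obs):
    # Fraction of timesteps on which the agent's action for the perturbed
    # observation matches its action for the corresponding clean observation.
    clean_actions = np.array([policy(o) for o in clean_obs])
    adv_actions = np.array([policy(o) for o in perturbed_obs])
    return float(np.mean(clean_actions == adv_actions))

# Illustrative usage with a toy discrete policy (argmax over a linear score).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 3))                 # 4-dim observation, 3 actions
    policy = lambda obs: int(np.argmax(obs @ W))
    clean = rng.normal(size=(100, 4))
    perturbed = clean + 0.05 * rng.normal(size=clean.shape)  # small additive perturbation
    print(action_consistency_ratio(policy, clean, perturbed))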

Original language: English
Title of host publication: Proceedings - 2023 IEEE 35th International Conference on Tools with Artificial Intelligence, ICTAI 2023
Publisher: IEEE Computer Society
Pages: 611-618
Number of pages: 8
ISBN (electronic): 9798350342734
DOI
Publication status: Published - 2023
Event: 35th IEEE International Conference on Tools with Artificial Intelligence, ICTAI 2023 - Atlanta, United States
Duration: 6 Nov 2023 - 8 Nov 2023

Publication series

Name: Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI
ISSN (Print): 1082-3409

Conference

Conference: 35th IEEE International Conference on Tools with Artificial Intelligence, ICTAI 2023
Country/Territory: United States
City: Atlanta
Period: 6/11/23 - 8/11/23
