PGN: A Perturbation Generation Network Against Deep Reinforcement Learning

Xiangjuan Li, Feifan Li, Yang Li, Quan Pan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Deep reinforcement learning has advanced greatly and has been applied in many areas. In this paper, we explore the vulnerability of deep reinforcement learning by proposing a novel generative model that creates effective adversarial examples to attack the agent. Our proposed model can perform both targeted and untargeted attacks. Considering the specificity of deep reinforcement learning, we propose the action consistency ratio as a measure of stealthiness, together with a new index that jointly measures effectiveness and stealthiness. Experimental results show that our method ensures both the effectiveness and the stealthiness of the attack compared with other algorithms. Moreover, our method is considerably faster and can thus achieve rapid and efficient verification of the vulnerability of deep reinforcement learning.
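The abstract names the action consistency ratio as its stealthiness measure but does not spell out the formula here. Below is a minimal Python sketch assuming the ratio is the fraction of observed states on which the attacked agent still selects the same action as on the clean input; the function name, toy linear policy, and perturbation scale are illustrative assumptions, not taken from the paper.

    import numpy as np

    def action_consistency_ratio(policy, states, perturbations):
        """Fraction of states on which the agent's action is unchanged
        by the adversarial perturbation (higher = stealthier attack).

        policy(state) -> action is the trained DRL agent's greedy policy;
        perturbations[i] is the adversarial noise for states[i].
        """
        consistent = sum(
            policy(s) == policy(s + d) for s, d in zip(states, perturbations)
        )
        return consistent / len(states)

    # Toy example: a linear "policy" over 2-D observations (3 actions).
    rng = np.random.default_rng(0)
    W = rng.normal(size=(3, 2))
    policy = lambda s: int(np.argmax(W @ s))
    states = [rng.normal(size=2) for _ in range(100)]
    perturbations = [0.01 * rng.normal(size=2) for _ in states]
    print(action_consistency_ratio(policy, states, perturbations))

Under this reading, a ratio near 1 means the perturbations rarely change the agent's visible behavior, which is why it can serve as a stealthiness proxy alongside a separate effectiveness measure such as reward degradation.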

Original language: English
Title of host publication: Proceedings - 2023 IEEE 35th International Conference on Tools with Artificial Intelligence, ICTAI 2023
Publisher: IEEE Computer Society
Pages: 611-618
Number of pages: 8
ISBN (Electronic): 9798350342734
DOIs
State: Published - 2023
Event: 35th IEEE International Conference on Tools with Artificial Intelligence, ICTAI 2023 - Atlanta, United States
Duration: 6 Nov 2023 – 8 Nov 2023

Publication series

Name: Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI
ISSN (Print): 1082-3409

Conference

Conference: 35th IEEE International Conference on Tools with Artificial Intelligence, ICTAI 2023
Country/Territory: United States
City: Atlanta
Period: 6/11/23 – 8/11/23

Keywords

  • adversarial attack
  • deep reinforcement learning
  • generative network
