A Temporal-Pattern Backdoor Attack to Deep Reinforcement Learning

Yinbo Yu, Jiajia Liu, Shouqing Li, Kepu Huang, Xudong Feng

Research output: Contribution to journal › Conference article › peer-review

7 Scopus citations

Abstract

Deep reinforcement learning (DRL) has achieved significant success in many real-world applications. However, these applications typically provide only partial observations for decision making, owing to occlusions and noisy sensors. This partial state observability can be exploited to hide malicious backdoor behaviors. In this paper, we exploit the sequential nature of DRL and propose a novel temporal-pattern backdoor attack on DRL, whose trigger is a set of temporal constraints on a sequence of observations rather than a single observation, and whose effect can be sustained for a controllable duration rather than acting only instantaneously. We validate our proposed backdoor attack on a typical job scheduling task in cloud computing. Extensive experimental results show that our backdoor achieves excellent effectiveness, stealthiness, and sustainability: its average clean data accuracy and attack success rate reach 97.8% and 97.5%, respectively.
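To make the trigger concept concrete, the following is a minimal hypothetical sketch (not the authors' code) of how a temporal-pattern trigger can be modeled: as an ordered list of predicates that must hold over a window of consecutive observations, rather than a predicate on one observation. The predicates, window logic, and toy load values are all illustrative assumptions.

```python
# Hypothetical illustration of a temporal-pattern trigger: the backdoor
# fires only when the last k observations satisfy k constraints in order,
# not when any single observation matches a pattern.
from collections import deque


def make_trigger_checker(constraints):
    """constraints: list of single-observation predicates.

    Returns a stateful checker; the trigger fires when the most recent
    len(constraints) observations satisfy the predicates in order.
    """
    window = deque(maxlen=len(constraints))

    def check(observation):
        window.append(observation)
        if len(window) < len(constraints):
            return False  # not enough history yet
        return all(pred(obs) for pred, obs in zip(constraints, window))

    return check


# Toy example: trigger fires on three consecutive observations with
# monotonically rising "load" crossing 0.5, 0.7, then 0.9.
check = make_trigger_checker([
    lambda o: o > 0.5,
    lambda o: o > 0.7,
    lambda o: o > 0.9,
])

hits = [check(o) for o in [0.2, 0.6, 0.8, 0.95, 0.4]]
# Only the window (0.6, 0.8, 0.95) satisfies all three constraints.
```

A single-observation backdoor would fire whenever one matching observation appears; constraining the pattern across time, as above, is what lets the trigger hide inside ordinary-looking observation sequences.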

Original language: English
Pages (from-to): 2710-2715
Number of pages: 6
Journal: Proceedings - IEEE Global Communications Conference, GLOBECOM
State: Published - 2022
Event: 2022 IEEE Global Communications Conference, GLOBECOM 2022 - Rio de Janeiro, Brazil
Duration: 4 Dec 2022 - 8 Dec 2022

Keywords

  • Backdoor attack
  • deep reinforcement learning
  • temporal feature
