ADMM-Based Adversarial False Data Injection Attacks Against Multi-Label Locational Detection

Jiwei Tian, Chao Shen, Chenhao Lin, Meng Zhang, Xiaofang Xia, Chao Ren, Peican Zhu, Chunming Wu, Xiang Chen

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

While multi-label learning has shown excellent performance in False Data Injection Attack (FDIA) locational detection, it also carries potential security risks and vulnerabilities. Unlike in the image domain, however, the vulnerabilities of multi-label learning in the power grid domain have only recently received attention and urgently need to be explored and addressed. In this paper, to better understand the security risks of deep learning-based multi-label FDIA detectors, we propose two Alternating Direction Method of Multipliers (ADMM)-based adversarial attacks, applicable to two different scenarios. Both attacks aim to reduce additional attack costs while seeking suitable adversarial perturbations, making them more realistic and feasible. Experimental results verify the effectiveness of the proposed ADMM-based attacks, advancing the understanding of vulnerabilities in the unique setting of deep multi-label learning for power systems.
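The abstract does not give the authors' formulation, but the general idea of using ADMM to find adversarial perturbations at low attack cost can be sketched as a split optimization: minimize a detector-evasion loss f while an l1 penalty keeps the perturbation sparse (few compromised measurements). The helper names, loss, and all parameters below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (element-wise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_sparse_perturbation(grad_fn, x0, lam=0.1, rho=1.0, step=0.1, iters=500):
    """Generic ADMM sketch for  min_d  f(x0 + d) + lam * ||d||_1.

    The perturbation is split into two copies (d, z) tied by the
    consensus constraint d = z, and the three classic ADMM updates
    alternate:
      d-update: gradient step on f plus the augmented quadratic term
      z-update: soft-thresholding (prox of the l1 cost penalty)
      u-update: dual ascent on the constraint d = z
    `grad_fn` is an assumed black-box gradient of the attacker's loss f.
    """
    d = np.zeros_like(x0)
    z = np.zeros_like(x0)
    u = np.zeros_like(x0)
    for _ in range(iters):
        # d-update: one gradient step on f(x0 + d) + (rho/2)||d - z + u||^2
        g = grad_fn(x0 + d) + rho * (d - z + u)
        d = d - step * g
        # z-update: exact prox of (lam/rho) * ||.||_1
        z = soft_threshold(d + u, lam / rho)
        # dual update on the scaled multiplier
        u = u + d - z
    return z  # z is exactly sparse thanks to the soft-thresholding

# Toy usage: a quadratic loss pulling the measurements toward a target t;
# entries of t that are cheap to ignore (below the l1 threshold) stay untouched.
x0 = np.zeros(5)
t = np.array([2.0, 0.01, -3.0, 0.02, 1.0])
d = admm_sparse_perturbation(lambda x: x - t, x0, lam=0.5)
```

For this toy quadratic the closed-form answer is the soft-thresholded residual, so the small target entries yield exactly zero perturbation: the attacker leaves those meters alone, which is the cost-reduction behavior the abstract alludes to.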

Original language: English
Pages (from-to): 263-277
Number of pages: 15
Journal: IEEE Transactions on Dependable and Secure Computing
Volume: 23
Issue number: 1
DOIs
State: Published - 2026

Keywords

  • ADMM
  • Adversarial example
  • adversarial machine learning
  • bad data detection
  • deep learning
  • false data injection
  • multi-label learning
  • power system state estimation
