PAGCL: An unsupervised graph poisoned attack for graph contrastive learning model

Qing Li, Ziyue Wang, Zehao Li

Research output: Contribution to journal › Article › peer-review

21 Scopus citations

Abstract

Graph contrastive learning has advanced unsupervised graph representation learning, achieving performance comparable to supervised models. However, the robustness of graph contrastive learning models remains a bottleneck: most existing adversarial attacks are supervised, and the labels they require cannot be guaranteed when attacking unsupervised graph contrastive learning models. Because traditional supervised graph adversarial attacks are unsuitable for attacking graph contrastive learning models, we propose an unsupervised attack method for graph contrastive learning. It combines a graph injection attack with a poisoned feature matrix and exploits the gradients of the poisoned adjacency matrix across different contrastive views. Extensive experiments on various datasets show that our method notably outperforms related methods, even supervised ones. The code is publicly available at https://github.com/lizehaodashuaibi/paper.
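
The abstract describes a gradient-based node-injection poisoning attack guided only by an unsupervised contrastive objective. The following PyTorch sketch illustrates that general idea, not the authors' PAGCL implementation: the encoder, edge-dropping augmentation, InfoNCE loss, victim-selection rule, and all hyperparameters below are simplifying assumptions. A few nodes are injected and their feature rows (the poisoned feature matrix) are optimized by gradient ascent on the contrastive loss computed between two augmented views, so no labels are needed.

```python
# Minimal sketch of an unsupervised node-injection poisoning attack on a
# graph contrastive learning (GCL) model. Illustrative only; not PAGCL.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gcn_layer(adj, x, w):
    """One GCN propagation step with symmetric normalization."""
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a_hat.sum(1).clamp(min=1).pow(-0.5)
    a_norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
    return a_norm @ x @ w

class Encoder(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(in_dim, hid_dim) * 0.1)
        self.w2 = nn.Parameter(torch.randn(hid_dim, hid_dim) * 0.1)

    def forward(self, adj, x):
        h = F.relu(gcn_layer(adj, x, self.w1))
        return gcn_layer(adj, h, self.w2)

def drop_edges(adj, p=0.2):
    """Random edge dropping: a common GCL view augmentation."""
    mask = (torch.rand_like(adj) > p).float()
    mask = torch.triu(mask, 1)
    return adj * (mask + mask.t())

def info_nce(z1, z2, tau=0.5):
    """InfoNCE-style contrastive loss between node embeddings of two views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = torch.exp(z1 @ z2.t() / tau)
    return -torch.log(sim.diag() / sim.sum(1)).mean()

# Toy clean graph (assumed, for illustration).
torch.manual_seed(0)
n, d, n_inj = 30, 16, 3
adj = torch.bernoulli(torch.full((n, n), 0.1))
adj = torch.triu(adj, 1); adj = adj + adj.t()
x = torch.randn(n, d)
encoder = Encoder(d, 32)

# Node injection: each injected node connects to a fixed set of victims
# (chosen at random here); only its feature row is optimized.
inj_feat = torch.randn(n_inj, d, requires_grad=True)
inj_edges = torch.zeros(n_inj, n)
for i in range(n_inj):
    inj_edges[i, torch.randperm(n)[:3]] = 1.0

def poisoned_adj():
    top = torch.cat([adj, inj_edges.t()], dim=1)
    bottom = torch.cat([inj_edges, torch.zeros(n_inj, n_inj)], dim=1)
    return torch.cat([top, bottom], dim=0)

opt = torch.optim.Adam([inj_feat], lr=0.1)
for step in range(50):
    adj_p = poisoned_adj()
    x_p = torch.cat([x, inj_feat], dim=0)
    # Gradients flow through both contrastive views of the poisoned graph.
    z1 = encoder(drop_edges(adj_p), x_p)
    z2 = encoder(drop_edges(adj_p), x_p)
    loss = -info_nce(z1, z2)        # ascend the contrastive loss
    opt.zero_grad(); loss.backward(); opt.step()

print("final contrastive loss on poisoned graph:", -loss.item())
```

In this sketch only the injected feature rows are attacked; a full poisoning attack would also optimize where the injected nodes attach (the poisoned adjacency matrix), which is the part the abstract attributes to gradients taken across the contrastive views.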

Original language: English
Pages (from-to): 240-249
Number of pages: 10
Journal: Future Generation Computer Systems
Volume: 149
DOIs
State: Published - Dec 2023

Keywords

  • Adversarial attack
  • Future-generation natural language processing
  • Graph contrastive learning
  • Graph representation learning
