TY - JOUR
T1 - NHGMI
T2 - Heterogeneous graph multi-view infomax with node-wise contrasting samples selection
AU - Li, Qing
AU - Ni, Hang
AU - Wang, Yuanchun
N1 - Publisher Copyright:
© 2024 Elsevier B.V.
PY - 2024/4/8
Y1 - 2024/4/8
N2 - Heterogeneous Graph Neural Networks (HGNNs) have consistently demonstrated exceptional performance in the context of Heterogeneous Information Networks (HINs). However, the majority of existing methods are constrained by their applicability to semi-supervised learning scenarios, which entail substantial labor and resources. Recently, the advent of contrastive learning has catalyzed the development of self-supervised HGNN approaches. In this study, we introduce NHGMI, a novel unsupervised method for heterogeneous graph representation learning, rooted in the principles of mutual information maximization. NHGMI comprehensively harnesses the inherent characteristics of HINs and adeptly amalgamates diverse semantic information emanating from multiple contrasting perspectives. In addition to inter-view contrasts, NHGMI also incorporates intra-view contrasts. Diverging from conventional contrastive learning approaches, NHGMI meticulously selects contrasting samples based on similarity metrics, thereby achieving a noise-free contrastive learning paradigm. Furthermore, we propose an extension of NHGMI that focuses on the generation of negative samples. We conduct extensive experiments across various real-world datasets, and NHGMI consistently exhibits marked superiority when compared to relevant methods, even surpassing semi-supervised approaches. The source code for NHGMI is publicly available at https://github.com/Frederick-the-Fox/NHGMI.
KW - Contrastive learning
KW - Graph neural network
KW - Heterogeneous graph
KW - Infomax
KW - Node-wise contrasting sample
UR - http://www.scopus.com/inward/record.url?scp=85185390875&partnerID=8YFLogxK
U2 - 10.1016/j.knosys.2024.111520
DO - 10.1016/j.knosys.2024.111520
M3 - Article
AN - SCOPUS:85185390875
SN - 0950-7051
VL - 289
JO - Knowledge-Based Systems
JF - Knowledge-Based Systems
M1 - 111520
ER -