TY - GEN
T1 - Logic-Aware Knowledge Graph Reasoning for Structural Sparsity under Large Language Model Supervision
AU - Pan, Yudai
AU - Hong, Jiajie
AU - Zhao, Tianzhe
AU - Song, Lingyun
AU - Liu, Jun
AU - Shang, Xuequn
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/4/28
Y1 - 2025/4/28
AB - Knowledge Graph (KG) reasoning aims to predict the missing entities in incomplete triples, which requires adequate structural information to derive accurate embeddings. However, real-world KGs are rarely as dense as idealized benchmarks, and their sparse graph structures limit the structural information needed for strong performance. Although the logical semantics in KGs shows potential for alleviating the impact of structural sparsity, challenges remain: deficient supervision and the semantic gap of logic make it difficult to introduce logical semantics into sparse KG reasoning. To this end, we propose LoLLM, a novel KG reasoning approach that injects logic using supervised information supplied by a Large Language Model (LLM), which has proven effective at evaluation and scoring. First, LoLLM derives structural embeddings with a graph convolutional network (GCN) equipped with relation-aware and triple-aware attention. Second, LoLLM constructs reasoning paths instantiated from first-order logic rules extracted from sparse KGs and injects their logical semantics through a designed LLM-enhanced tuning strategy. We propose a textual loss (TL) and a logical loss (LL) for this optimization and thereby obtain logical tuning embeddings of the KG. Finally, LoLLM fuses the structural embeddings from the GCN with the logical tuning embeddings from LLM-enhanced tuning for scoring and incomplete triple prediction. Extensive experiments on two sparse KGs and a benchmark show that LoLLM outperforms state-of-the-art structure-based and Language Model (LM)-augmented baselines. Moreover, the logic rules with their corresponding confidences provide explicit explanations, yielding an interpretable paradigm.
KW - First-order Logic
KW - Knowledge Graph
KW - LLM-enhanced Tuning
KW - Sparse Knowledge Graph Reasoning
UR - http://www.scopus.com/inward/record.url?scp=105005138779&partnerID=8YFLogxK
U2 - 10.1145/3696410.3714685
DO - 10.1145/3696410.3714685
M3 - Conference contribution
AN - SCOPUS:105005138779
T3 - WWW 2025 - Proceedings of the ACM Web Conference
SP - 4531
EP - 4542
BT - WWW 2025 - Proceedings of the ACM Web Conference
PB - Association for Computing Machinery, Inc
T2 - 34th ACM Web Conference, WWW 2025
Y2 - 28 April 2025 through 2 May 2025
ER -