Topology-Aware Reasoning over Incomplete Knowledge Graph with Graph-Based Soft Prompting
arXiv cs.CL / April 15, 2026
Key Points
- The paper addresses hallucinations in knowledge-intensive QA by grounding LLM generation in incomplete Knowledge Graphs (KGs) rather than relying on brittle explicit edge traversal.
- It introduces a graph-based soft prompting method where a Graph Neural Network (GNN) encodes structural subgraphs into soft prompts, enabling subgraph-level reasoning for better entity/relation relevance when edges are missing.
- A two-stage framework is proposed to cut computation: a lightweight LLM first uses the soft prompts to select relevant entities and relations, then a stronger LLM performs evidence-aware answer generation.
- Experiments on four multi-hop KBQA benchmarks report state-of-the-art results on three of the four, and the authors release code in a public GitHub repository.
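
The pipeline described above — a GNN encoding an incomplete subgraph into soft-prompt vectors that are prepended to the LLM's input embeddings — can be sketched roughly as follows. This is a minimal illustrative toy, not the paper's implementation: the graph, dimensions, and all weight matrices (`W1`, `W2`, `W_prompt`) are untrained random placeholders, and the mean-aggregation layer stands in for whatever GNN architecture the authors actually use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy subgraph: 4 entities; adjacency marks observed edges only
# (the KG is incomplete, so some true edges are absent).
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 8))  # initial entity features, dim 8

def gnn_layer(A, H, W):
    """One simplified mean-aggregation message-passing step."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    D_inv = 1.0 / A_hat.sum(axis=1, keepdims=True) # row-normalize
    return np.tanh(D_inv * (A_hat @ H) @ W)

W1 = rng.normal(size=(8, 8)) * 0.1
W2 = rng.normal(size=(8, 8)) * 0.1
H = gnn_layer(A, gnn_layer(A, X, W1), W2)  # two hops of structure

# Project the pooled subgraph state into k soft-prompt vectors living
# in the LLM's embedding space (d_model = 16 in this toy).
k, d_model = 2, 16
W_prompt = rng.normal(size=(8, k * d_model)) * 0.1
soft_prompt = (H.mean(axis=0) @ W_prompt).reshape(k, d_model)

# Prepend the soft prompts to the question's token embeddings; the
# combined sequence is what the (lightweight) LLM would consume when
# selecting relevant entities and relations.
question_emb = rng.normal(size=(5, d_model))  # 5 question tokens
llm_input = np.concatenate([soft_prompt, question_emb], axis=0)
print(llm_input.shape)  # (7, 16)
```

Because the prompts are continuous vectors rather than verbalized triples, the downstream LLM can condition on subgraph-level structure even when an explicit edge needed for hop-by-hop traversal is missing.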