LogosKG: Hardware-Optimized Scalable and Interpretable Knowledge Graph Retrieval

arXiv cs.CL / April 22, 2026


Key Points

  • The paper introduces LogosKG, a hardware-aligned framework for scalable and interpretable k-hop (multi-hop) knowledge graph retrieval integrated with LLM-based systems.
  • LogosKG improves efficiency by decomposing subject, object, and relation representations and executing KG traversal as hardware-efficient operations derived from symbolic KG formulations.
  • To handle very large graphs (up to billion-edge scale), it adds degree-aware partitioning, cross-graph routing, and on-demand caching.
  • Experiments report substantial efficiency gains versus CPU/GPU baselines while maintaining retrieval fidelity.
  • The authors also show that a downstream two-round KG-LLM interaction benefits from LogosKG, enabling evidence-grounded biomedical analysis of how KG topology shapes LLM diagnostic reasoning.
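The traversal idea described above can be illustrated with a toy sketch: under a symbolic KG formulation, each relation becomes a boolean adjacency matrix, and one retrieval hop becomes a vector-matrix product, an operation that maps naturally onto hardware-efficient kernels. This is an illustrative approximation, not LogosKG's actual decomposed representation; the entity names and the `k_hop` helper are hypothetical.

```python
import numpy as np

# Toy KG encoded as per-relation adjacency matrices (hypothetical example;
# LogosKG's real subject/object/relation decomposition is not reproduced here).
# Entities: 0=aspirin, 1=inflammation, 2=pain, 3=ibuprofen
N = 4
treats = np.zeros((N, N), dtype=np.int8)
treats[0, 1] = treats[3, 1] = 1   # aspirin/ibuprofen --treats--> inflammation
causes = np.zeros((N, N), dtype=np.int8)
causes[1, 2] = 1                  # inflammation --causes--> pain

def k_hop(seeds, relations, k):
    """Entities reachable in exactly k hops from the seed set, over any relation."""
    frontier = np.zeros(N, dtype=np.int8)
    frontier[list(seeds)] = 1
    adj = (sum(relations) > 0).astype(np.int8)   # union of relation matrices
    for _ in range(k):
        # One hop = a vector-matrix product followed by a threshold.
        frontier = (frontier @ adj > 0).astype(np.int8)
    return np.flatnonzero(frontier)

print(k_hop([0], [treats, causes], 2).tolist())  # -> [2]: aspirin -> inflammation -> pain
```

Because each hop reduces to dense or sparse linear-algebra primitives, the same loop scales to large graphs once the matrices are partitioned and cached, which is where the paper's degree-aware partitioning and routing come in.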

Abstract

Knowledge graphs (KGs) are increasingly integrated with large language models (LLMs) to provide structured, verifiable reasoning. A core operation in this integration is multi-hop retrieval, yet existing systems struggle to balance efficiency, scalability, and interpretability. We introduce LogosKG, a novel, hardware-aligned framework that enables scalable and interpretable k-hop retrieval on large KGs by building on symbolic KG formulations and executing traversal as hardware-efficient operations over decomposed subject, object, and relation representations. To scale to billion-edge graphs, LogosKG integrates degree-aware partitioning, cross-graph routing, and on-demand caching. Experiments show substantial efficiency gains over CPU and GPU baselines without loss of retrieval fidelity. Building on this retrieval performance, a downstream two-round KG-LLM interaction demonstrates how LogosKG enables large-scale, evidence-grounded analysis of the way KG topology, such as hop distribution and connectivity, shapes the alignment between structured biomedical knowledge and LLM diagnostic reasoning, thereby opening the door to next-generation KG-LLM integration. The source code is publicly available at https://github.com/LARK-NLP-Lab/LogosKG, and an online demo is available at https://lark-nlp-lab-logoskg.hf.space/.