Position: How can Graphs Help Large Language Models?

arXiv cs.AI / 5/5/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper asks the complementary question of how graphs can help large language models (LLMs), rather than focusing only on how LLMs improve graph learning.
  • It argues that graphs can serve as up-to-date knowledge sources to reduce LLM hallucinations.
  • It surveys graph-based prompting approaches—such as Chain-of-Thought (CoT), Tree-of-Thought (ToT), and Graph-of-Thought (GoT)—that can strengthen LLM reasoning.
  • It notes that integrating graphs into LLMs improves handling of structured data, enabling wider use in areas like e-commerce, code, and relational databases (RDBs).
  • It outlines future directions including graph-inspired sparse LLM architectures and brain-like memory systems driven by graphs.

Abstract

With the rapid advancement of large language models (LLMs), classic graph learning tasks have greatly benefited from LLMs, including improved encoding of textual features, more efficient construction of graphs from text, and enhanced reasoning over knowledge graphs. In this paper, we ask a complementary question: How can graphs help LLMs? We address this question from three perspectives: 1) graphs provide an up-to-date knowledge source that helps reduce LLM hallucinations, 2) graph-based prompting techniques, such as Chain-of-Thought (CoT), Tree-of-Thought (ToT), and Graph-of-Thought (GoT), enhance LLM reasoning capabilities, and 3) integrating graphs into LLMs improves their understanding of structured data, expanding their applicability to domains such as e-commerce, code, and relational databases (RDBs). We further outline some future directions, including designing sparse LLM architectures based on graphs and brain-inspired memory systems.
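The first perspective, using a graph as an up-to-date knowledge source to ground LLM answers, amounts to a retrieve-then-prompt pattern: look up facts about the entities in a question, then place those facts in the prompt so the model answers from them rather than from stale parametric memory. A minimal Python sketch of that pattern, with hypothetical triples and prompt wording (not taken from the paper):

```python
# Minimal retrieve-then-prompt sketch: a tiny knowledge graph of
# (subject, relation, object) triples grounds an LLM prompt with
# explicit facts. Triples and prompt format are illustrative only.
from collections import defaultdict


class KnowledgeGraph:
    """Stores triples and supports simple per-entity fact lookup."""

    def __init__(self):
        self.triples = defaultdict(list)

    def add(self, subj, rel, obj):
        self.triples[subj].append((rel, obj))

    def facts_about(self, subj):
        """Return all stored facts about `subj` as plain sentences."""
        return [f"{subj} {rel} {obj}" for rel, obj in self.triples[subj]]


def grounded_prompt(kg, entity, question):
    """Prepend retrieved graph facts so the model answers from them."""
    facts = kg.facts_about(entity)
    context = "\n".join(f"- {f}" for f in facts) or "- (no facts found)"
    return (
        "Answer using only the facts below.\n"
        f"Facts:\n{context}\n"
        f"Question: {question}"
    )


kg = KnowledgeGraph()
kg.add("GoT", "stands for", "Graph-of-Thought")
kg.add("GoT", "generalizes", "Tree-of-Thought prompting")
print(grounded_prompt(kg, "GoT", "What does GoT generalize?"))
```

Because the graph can be updated independently of the model's weights, refreshing the triples refreshes the answers, which is the hallucination-reduction argument in miniature.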
