AgentGL: Towards Agentic Graph Learning with LLMs via Reinforcement Learning

arXiv cs.CL / 4/8/2026


Key Points

  • The paper proposes Agentic Graph Learning (AGL), reframing how LLM-based agents should learn and reason over real-world graph data by explicitly leveraging graph topology rather than treating external information as unstructured text.
  • It introduces AgentGL, described as the first reinforcement-learning-driven framework for AGL, combining graph-native exploration tools with an LLM agent for topology-aware navigation and inference.
  • AgentGL uses “search-constrained thinking” to regulate tool usage, aiming to balance accuracy with efficiency while executing multi-step, long-horizon decision-making.
  • The approach employs a graph-conditioned curriculum RL strategy to stabilize long-horizon policy learning without relying on step-wise supervision.
  • Experiments on Text-Attributed Graph (TAG) benchmarks across multiple LLM backbones show substantial gains—up to 17.5% absolute for node classification and 28.4% for link prediction—along with publicly released code.
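To make the key points above concrete, here is a minimal sketch of what an agent loop with graph-native tools and "search-constrained thinking" could look like. The paper's actual tool interface, policy, and budget mechanism are not described here, so every identifier below (`get_text`, `get_neighbors`, `agent_classify`, the BFS heuristic standing in for the LLM policy) is an illustrative assumption, not AgentGL's implementation.

```python
from collections import deque

# Hypothetical text-attributed graph (TAG): node -> text attribute,
# node -> adjacency list. Purely illustrative data.
NODE_TEXT = {0: "graph neural networks survey", 1: "LLM agents for tools",
             2: "reinforcement learning basics", 3: "graph RAG pipelines"}
EDGES = {0: [1, 3], 1: [0, 2], 2: [1], 3: [0]}

# Graph-native tools the agent may call.
def get_text(node):       # read a node's text attribute
    return NODE_TEXT[node]

def get_neighbors(node):  # expose topology rather than flattened text
    return EDGES[node]

def agent_classify(start, keyword, budget=3):
    """Search-constrained exploration: at most `budget` tool calls.

    A BFS heuristic stands in for the learned LLM policy: report the
    start node as positive if any explored node's text mentions `keyword`.
    """
    calls, frontier, seen = 0, deque([start]), {start}
    while frontier and calls < budget:
        node = frontier.popleft()
        calls += 1                       # each expansion spends budget
        if keyword in get_text(node):
            return True, calls
        for nb in get_neighbors(node):   # topology-aware navigation
            if nb not in seen:
                seen.add(nb)
                frontier.append(nb)
    return False, calls                  # budget exhausted or graph covered
```

The budget parameter captures the accuracy-versus-efficiency trade-off the paper attributes to search-constrained thinking: with `budget=3` the search from node 0 never reaches node 2's "reinforcement learning" text, while `budget=4` does.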

Abstract

Large Language Models (LLMs) increasingly rely on agentic capabilities (iterative retrieval, tool use, and decision-making) to overcome the limits of static, parametric knowledge. Yet existing agentic frameworks treat external information as unstructured text and fail to leverage the topological dependencies inherent in real-world data. To bridge this gap, we introduce Agentic Graph Learning (AGL), a paradigm that reframes graph learning as an interleaved process of topology-aware navigation and LLM-based inference. Specifically, we propose AgentGL, the first reinforcement learning (RL)-driven framework for AGL. AgentGL equips an LLM agent with graph-native tools for multi-scale exploration, regulates tool usage via search-constrained thinking to balance accuracy and efficiency, and employs a graph-conditioned curriculum RL strategy to stabilize long-horizon policy learning without step-wise supervision. Across diverse Text-Attributed Graph (TAG) benchmarks and multiple LLM backbones, AgentGL substantially outperforms strong GraphLLMs and GraphRAG baselines, achieving absolute improvements of up to 17.5% in node classification and 28.4% in link prediction. These results demonstrate that AGL is a promising frontier for enabling LLMs to autonomously navigate and reason over complex relational environments. The code is publicly available at https://github.com/sunyuanfu/AgentGL.
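The abstract's "graph-conditioned curriculum RL" can be read as ordering training episodes from structurally easy to structurally hard so that long-horizon policy learning stays stable. The paper's actual difficulty measure is not given here; the sketch below uses hop distance and node degree as purely illustrative proxies, and `curriculum_order` is a hypothetical helper, not AgentGL's API.

```python
def curriculum_order(tasks):
    """Sort training tasks easy-to-hard by a structural difficulty score.

    Assumed proxies: targets more hops away are harder, and low-degree
    (sparsely connected) anchor nodes offer fewer exploration paths,
    so they are also treated as harder.
    """
    def difficulty(task):
        return task["hops"] + 1.0 / (1 + task["degree"])
    return sorted(tasks, key=difficulty)

# Illustrative episodes: (hops to the answer node, anchor-node degree).
tasks = [{"id": "a", "hops": 3, "degree": 2},
         {"id": "b", "hops": 1, "degree": 5},
         {"id": "c", "hops": 2, "degree": 1}]
print([t["id"] for t in curriculum_order(tasks)])  # easy episodes first
```

A schedule like this would feed short-horizon episodes to the RL loop first, growing the horizon as the policy improves, which is one common way curricula are used to stabilize sparse-reward training.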