TRN-R1-Zero: Text-rich Network Reasoning via LLMs with Reinforcement Learning Only

arXiv cs.CL / 4/22/2026


Key Points

  • TRN-R1-Zero is a new post-training framework for zero-shot reasoning on text-rich networks that combines textual semantics with relational structure without task-specific supervision.
  • The approach directly optimises base LLMs with a neighbour-aware Group Relative Policy Optimisation objective, using a margin-gain reward metric to encourage relational reasoning.
  • Unlike prior LLM-based methods that may ignore graph context or rely on distillation, TRN-R1-Zero avoids supervised fine-tuning and does not require chain-of-thought data from larger models.
  • Experiments on multiple TRN benchmarks (citation, hyperlink, social, and co-purchase) show improved performance and robustness, and the method enables zero-shot inference for edge- and graph-level tasks based only on node-level training.
  • The accompanying code is released publicly, supporting reproducibility and further experimentation.

Abstract

Zero-shot reasoning on text-rich networks (TRNs) remains a challenging frontier, as models must integrate textual semantics with relational structure without task-specific supervision. While graph neural networks rely on fixed label spaces and supervised objectives, recent large language model (LLM)-based approaches often overlook graph context or depend on distillation from larger models, limiting generalisation. We propose TRN-R1-Zero, a post-training framework for TRN reasoning trained solely via reinforcement learning. TRN-R1-Zero directly optimises base LLMs using a Neighbour-aware Group Relative Policy Optimisation objective that dynamically adjusts rewards based on a novel margin gain metric for the informativeness of neighbouring signals, effectively guiding the model toward relational reasoning. Unlike prior methods, TRN-R1-Zero requires no supervised fine-tuning or chain-of-thought data generated from large reasoning models. Extensive experiments across citation, hyperlink, social and co-purchase TRN benchmarks demonstrate the superiority and robustness of TRN-R1-Zero. Moreover, relying strictly on node-level training, TRN-R1-Zero achieves zero-shot inference on edge- and graph-level tasks, extending beyond cross-domain transfer. The codebase is publicly available at https://github.com/superallen13/TRN-R1-Zero.
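The abstract describes two ingredients: a standard group-relative advantage (as in GRPO) and a reward that is rescaled by a "margin gain" measuring how informative a node's neighbours are. The paper's exact formulas are not given in this summary, so the sketch below is an illustrative assumption: the margin-gain form and the function names are hypothetical, not the authors' implementation.

```python
# Hedged sketch of a GRPO-style update signal with a neighbour-aware
# reward adjustment. The margin-gain formula and the scaling
# (1 + alpha * gain) are assumptions for illustration only; the actual
# TRN-R1-Zero objective is defined in the paper and codebase.

def group_relative_advantages(rewards):
    """GRPO-style normalisation: each sampled response's advantage is
    its reward centred and scaled by the group's mean and std."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    std = std if std > 0 else 1.0  # avoid divide-by-zero on uniform groups
    return [(r - mean) / std for r in rewards]

def margin_gain(score_with_neighbours, score_text_only):
    """Hypothetical margin-gain metric: how much the model's confidence
    margin on the correct answer improves when neighbouring-node text
    is included in the prompt versus the node's text alone."""
    return score_with_neighbours - score_text_only

def neighbour_aware_rewards(base_rewards, gains, alpha=0.5):
    """Assumed reward adjustment: scale each base reward by the
    informativeness of the neighbour context for that sample."""
    return [r * (1.0 + alpha * g) for r, g in zip(base_rewards, gains)]
```

Under this reading, samples whose answers benefit from relational context receive amplified rewards, which (after group-relative normalisation) biases the policy toward actually using the graph structure rather than the node text alone.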