
On Information Self-Locking in Reinforcement Learning for Active Reasoning of LLM agents

arXiv cs.AI / 3/13/2026


Key Points

  • The paper identifies information self-locking in reinforcement-learning-trained LLM agents during active reasoning, where agents cease asking informative questions and struggle to internalize already-obtained information.
  • It decomposes active reasoning into Action Selection (AS) and Belief Tracking (BT), showing that deficiencies in these capabilities limit information exploration during training.
  • The authors describe a feedback loop where insufficient exploration prevents AS and BT improvement, locking the agent in a low-information regime.
  • To address this, they reallocate the learning signal by injecting easy-to-obtain directional critiques to help the agent escape self-locking.
  • Across seven datasets, the approach significantly mitigates information self-locking, yielding improvements of up to 60%.
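The feedback loop in the third point can be illustrated with a toy dynamical sketch. This is an assumption for illustration only, not the paper's formal model: capability (AS/BT) improves only in proportion to the information the agent explores, while exploration is in turn bounded by capability.

```python
def simulate(capability: float, steps: int = 50, lr: float = 0.5) -> float:
    """Toy model of the exploration/capability feedback loop (illustrative).

    capability: combined AS/BT skill in [0, 1] (hypothetical scalar).
    Each step, the information gathered is bounded by current capability,
    and capability improves only in proportion to information gathered.
    """
    for _ in range(steps):
        info = capability  # exploration is limited by AS/BT capability
        capability += lr * info * (1.0 - capability)  # learning needs info
    return capability
```

In this toy, a degenerate zero-capability start never moves (the locked low-information regime), while a moderate start converges toward full capability, mirroring the self-reinforcing dynamic the key points describe.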

Abstract

Reinforcement learning (RL) with outcome-based rewards has achieved significant success in training large language model (LLM) agents for complex reasoning tasks. However, in active reasoning where agents need to strategically ask questions to acquire task-relevant information, we find that LLM agents trained with RL often suffer from information self-locking: the agent ceases to ask informative questions and struggles to internalize already-obtained information. To understand the phenomenon, we decompose active reasoning into two core capabilities: Action Selection (AS), which determines the observation stream through queries, and Belief Tracking (BT), which updates the agent's belief based on collected evidence. We show that deficient AS and BT capabilities will limit the information exploration during RL training. Furthermore, insufficient exploration in turn hinders the improvement of AS and BT, creating a feedback loop that locks the agent in a low-information regime. To resolve the issue, we propose a simple yet effective approach that reallocates the learning signal by injecting easy-to-obtain directional critiques to help the agent escape self-locking. Extensive experiments with 7 datasets show that our approach significantly mitigates the information self-locking, bringing up to 60% improvements.
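The abstract does not spell out how the learning signal is reallocated, so the following is a minimal sketch under assumptions: a sparse outcome reward is blended with a cheap, dense "directional critique" score (a hypothetical signal, e.g. a judgment of how informative the agent's last question was), so that even failed trajectories receive some gradient direction.

```python
def shaped_reward(outcome_correct: bool,
                  critique_score: float,
                  bonus_weight: float = 0.2) -> float:
    """Blend a sparse outcome reward with a dense critique signal (sketch).

    outcome_correct: whether the final answer was right (sparse, 0/1).
    critique_score: in [0, 1], a cheap directional judgment of the agent's
        questioning behavior (hypothetical signal, assumed here).
    bonus_weight: fraction of the learning signal reallocated to the
        critique, so uninformative trajectories still get direction.
    """
    outcome = 1.0 if outcome_correct else 0.0
    return (1.0 - bonus_weight) * outcome + bonus_weight * critique_score
```

Under pure outcome rewards, a failed trajectory earns zero signal regardless of how informative its questions were; with the blended reward above, a failed trajectory with a high critique score still receives a nonzero signal, which is one plausible reading of how injected critiques could break the self-locking loop.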