AgentV-RL: Scaling Reward Modeling with Agentic Verifier

arXiv cs.CL / 4/20/2026


Key Points

  • The paper proposes “Agentic Verifier,” a framework that improves reward modeling by running a multi-turn, tool-augmented deliberation process rather than relying solely on test-time scaling with verifiers.
  • It uses complementary forward and backward agents to trace reasoning from premises to conclusions and then re-check conclusions against the premises, aiming to reduce false positives caused by faulty intermediate steps.
  • The approach addresses reliability issues in computation- or knowledge-intensive domains by adding external grounding through tool use during verification.
  • The paper introduces “AgentV-RL” for practical deployment, where an autonomous verifier interleaves tool use with internal reasoning via proactive exploration and reinforcement learning.
  • Experiments report consistent gains under both parallel and sequential test-time scaling, with a 4B variant outperforming state-of-the-art outcome reward models (ORMs) by 25.2%, suggesting a strong direction for agentic reward modeling.
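The forward/backward idea can be illustrated with a toy sketch. Here the "solution" is a chain of steps for a linear equation a*x + b = c; the forward check re-derives each intermediate step, and the backward check substitutes the conclusion into the original premise. All function names are illustrative assumptions, not the paper's actual API.

```python
def forward_verify(a, b, c, steps):
    # Forward agent: trace premises -> conclusion, re-deriving each
    # intermediate step of a*x + b = c and comparing to the claimed chain.
    derived = [c - b, (c - b) / a]
    return all(abs(s - d) < 1e-9 for s, d in zip(steps, derived))

def backward_verify(a, b, c, x):
    # Backward agent: re-check the conclusion against the premise
    # by substituting x back into a*x + b = c.
    return abs(a * x + b - c) < 1e-9

def agentic_verdict(a, b, c, steps, x):
    # Accept only when both directions agree, which is how the paper
    # motivates reducing false positives from plausible-looking but
    # faulty intermediate steps.
    return forward_verify(a, b, c, steps) and backward_verify(a, b, c, x)

# Correct solution to 2x + 3 = 11: intermediate 8, conclusion x = 4
print(agentic_verdict(2, 3, 11, [8, 4], 4))   # True
# Faulty intermediate step leading to a wrong conclusion x = 5
print(agentic_verdict(2, 3, 11, [8, 5], 5))   # False
```

A forward-only check would need to catch the faulty step on its own; the backward pass gives an independent second signal grounded in the original premises.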

Abstract

Verifiers have been demonstrated to enhance LLM reasoning via test-time scaling (TTS). Yet they face significant challenges in complex domains. Error propagation from incorrect intermediate reasoning can lead to false positives for seemingly plausible solutions, while the lack of external grounding makes verifiers unreliable on computation- or knowledge-intensive tasks. To address these challenges, we propose Agentic Verifier, a framework that transforms reward modeling into a multi-turn, tool-augmented deliberative process. We introduce complementary forward and backward agents: one traces solutions from premises to conclusions, while the other re-checks conclusions against their underlying premises. This bidirectional process enables a comprehensive, reliable, and interpretable assessment of solutions. To facilitate practical deployment, we propose AgentV-RL. Through proactive exploration and reinforcement learning, the verifier autonomously interleaves tool use with internal reasoning. Extensive experiments show that Agentic Verifier yields consistent performance gains under both parallel and sequential TTS. Notably, our 4B variant surpasses state-of-the-art ORMs by 25.2%, positioning it as a promising paradigm for agentic reward modeling.
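The "interleaves tool use with internal reasoning" behavior can be sketched as a simple multi-turn loop: for each step of a candidate solution, the verifier either reasons internally or calls an external tool to ground a checkable claim. This is a minimal sketch under assumptions of our own; the regex-based trigger and the `calculator` tool are hypothetical stand-ins, not the trained policy the paper learns with RL.

```python
import re

def calculator(expr):
    # External "tool": grounds arithmetic claims instead of trusting
    # the model's own computation (eval restricted to bare arithmetic).
    return eval(expr, {"__builtins__": {}})

def verify(solution_steps):
    # Hypothetical multi-turn verification loop: each step is either a
    # checkable numeric claim like "3 * 7 = 21" (handled by a tool call)
    # or free-form text (handled as an internal reasoning turn).
    transcript = []
    for step in solution_steps:
        m = re.fullmatch(r"\s*([\d\s+\-*/().]+)=\s*([\d.]+)\s*", step)
        if m:  # numeric claim -> ground it with the tool
            ok = abs(calculator(m.group(1)) - float(m.group(2))) < 1e-9
            transcript.append(("tool", step, ok))
            if not ok:  # reject on the first externally refuted claim
                return False, transcript
        else:  # nothing checkable -> internal reasoning turn
            transcript.append(("reason", step, True))
    return True, transcript

ok, log = verify(["let t be the running total", "3 * 7 = 21", "21 + 2 = 23"])
print(ok)  # True
bad, _ = verify(["3 * 7 = 22"])
print(bad)  # False
```

In AgentV-RL as described, the decision of when to call a tool is learned through proactive exploration and reinforcement learning rather than hard-coded pattern matching as in this toy.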