Enhancing LLM-based Search Agents via Contribution Weighted Group Relative Policy Optimization

arXiv cs.LG / 4/17/2026


Key Points

  • The paper proposes CW-GRPO (Contribution-Weighted Group Relative Policy Optimization) to improve reinforcement learning for LLM-based search agents by better handling credit assignment across a search trajectory.
  • Instead of relying on unstable process rewards or sparse trajectory-level outcome rewards, CW-GRPO uses an LLM judge to score retrieval utility and reasoning correctness at each search round.
  • These per-round contribution scores are used to rescale outcome-based advantages, enabling finer-grained credit assignment while maintaining training stability.
  • Experiments on multiple knowledge-intensive benchmarks show CW-GRPO outperforms standard GRPO by 5.0% on Qwen3-8B and 6.3% on Qwen3-1.7B, indicating more effective search behaviors.
  • The analysis suggests that successful trajectories tend to concentrate high contributions in particular rounds, offering empirical guidance for understanding what makes search agents succeed.
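The weighting scheme described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the function names, the normalization of the judge scores, and the epsilon constants are all assumptions; the paper only specifies that per-round contribution scores rescale the trajectory's outcome-based group-relative advantage.

```python
import numpy as np

def grpo_advantages(rewards):
    """Standard GRPO: normalize outcome rewards within a group of rollouts."""
    rewards = np.asarray(rewards, dtype=float)
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

def cw_grpo_advantages(rewards, contribution_scores):
    """Hypothetical CW-GRPO sketch.

    rewards: one outcome reward per trajectory in the group.
    contribution_scores[i]: per-round scores for trajectory i, e.g. an
    LLM judge's ratings of retrieval utility and reasoning correctness
    at each search round.
    """
    base = grpo_advantages(rewards)
    per_round = []
    for adv, scores in zip(base, contribution_scores):
        s = np.asarray(scores, dtype=float)
        # Normalize so the weights average to 1 across rounds (an
        # assumed choice), then rescale the trajectory-level advantage
        # into round-level advantages.
        w = s / (s.sum() + 1e-8) * len(s)
        per_round.append(adv * w)
    return per_round
```

Under this sketch, a round the judge scores highly inside a successful trajectory receives a larger positive advantage than a low-contribution round in the same trajectory, which is the finer-grained credit assignment the paper describes.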

Abstract

Search agents extend Large Language Models (LLMs) beyond static parametric knowledge by enabling access to up-to-date and long-tail information unavailable during pretraining. While reinforcement learning has been widely adopted for training such agents, existing approaches face key limitations: process supervision often suffers from unstable value estimation, whereas outcome supervision struggles with credit assignment due to sparse, trajectory-level rewards. To bridge this gap, we propose Contribution-Weighted GRPO (CW-GRPO), a framework that integrates process supervision into group relative policy optimization. Instead of directly optimizing process rewards, CW-GRPO employs an LLM judge to assess the retrieval utility and reasoning correctness at each search round, producing per-round contribution scores. These scores are used to rescale outcome-based advantages along the trajectory, enabling fine-grained credit assignment without sacrificing optimization stability. Experiments on multiple knowledge-intensive benchmarks show that CW-GRPO outperforms standard GRPO by 5.0% on Qwen3-8B and 6.3% on Qwen3-1.7B, leading to more effective search behaviors. Additional analysis reveals that successful trajectories exhibit concentrated contributions across rounds, providing empirical insight into search agent tasks.