Exploring Pass-Rate Reward in Reinforcement Learning for Code Generation

arXiv cs.LG / 5/6/2026


Key Points

  • The paper studies using pass rate (the fraction of test cases a solution passes) as a surrogate reward in critic-free reinforcement learning for code generation, where the binary “pass all tests” reward is too sparse.
  • Across multiple base models and critic-free RL algorithms (e.g., GRPO and RLOO), the authors find that pass-rate rewards do not consistently improve final code-generation performance compared with binary rewards in controlled experiments.
  • Although pass-rate rewards are denser and provide more frequent learning signals, the resulting gradient updates often fail to shift probability mass toward fully correct solutions.
  • The study attributes this to pass-rate being a miscalibrated proxy for full correctness, where partially passing solutions within the same group can create conflicting gradient directions that cancel out.
  • The findings suggest that, in critic-free RL, pass-rate rewards alone are insufficient, and that future reward designs should better align optimization objectives with full correctness.
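To make the density argument concrete, here is an illustrative sketch (not the paper's code) of GRPO-style group-relative advantages under the two reward schemes. The group size, test counts, and reward values are hypothetical; the point is that when no sampled solution passes all tests, the binary reward yields a zero advantage for every sample, while the pass-rate reward still produces nonzero updates.

```python
# Sketch of GRPO-style group-normalized advantages: A_i = (r_i - mean) / std,
# computed within one group of sampled solutions for the same problem.

def group_advantages(rewards):
    """Normalize rewards within a sampled group (GRPO-style)."""
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    if std == 0:  # all rewards identical -> no learning signal for this group
        return [0.0] * n
    return [(r - mean) / std for r in rewards]

# Hypothetical group of 4 sampled solutions; none passes all 5 tests.
tests_passed = [0, 2, 3, 0]
num_tests = 5

binary = [1.0 if p == num_tests else 0.0 for p in tests_passed]
pass_rate = [p / num_tests for p in tests_passed]

print(group_advantages(binary))     # all zeros: sparse, no gradient
print(group_advantages(pass_rate))  # nonzero: denser signal, but it rewards
                                    # partial passes, not full correctness
```

Note that the denser pass-rate advantages push probability mass toward the highest partial-pass sample in the group, which is exactly the behavior the paper finds does not reliably translate into more full-pass solutions.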

Abstract

Reinforcement learning (RL) from unit-test feedback has become a standard post-training recipe for improving large language models (LLMs) on code generation. However, the pass-all-tests binary reward can be sparse, yielding no learning signal on challenging problems where none of the sampled solutions passes all tests. A common remedy is to use the test-case pass rate as a surrogate reward. In this work, we study pass-rate rewards in critic-free RL for code generation (e.g., GRPO and RLOO) and report a consistent pattern across base models and algorithms: despite alleviating reward sparsity, pass-rate rewards do not reliably improve final performance over binary rewards in rigorous controlled experiments. To understand this discrepancy, we analyze reward density and the resulting gradient directions. We find that pass-rate rewards are denser, but the induced gradient updates do not consistently move probability mass toward full-pass solutions. This arises because test-case pass rate is a miscalibrated surrogate for progress toward full correctness, and partial-pass solutions within the same group can induce conflicting gradient directions that cancel out. Overall, our results suggest that, in critic-free RL, pass-rate rewards are insufficient to improve code generation and motivate reward designs that better align optimization with the goal of full correctness.
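The abstract's cancellation argument can be illustrated with a toy example using RLOO's leave-one-out baseline (the rewards and test sets below are hypothetical, not drawn from the paper). A solution that uniquely passes some tests can still receive a negative advantage simply because another sample in the group has a higher overall pass rate, so its (partially correct) behavior is pushed down.

```python
# Toy illustration of conflicting signals under pass-rate rewards with RLOO.

def rloo_advantages(rewards):
    """RLOO: each sample's baseline is the mean reward of the other samples."""
    n = len(rewards)
    total = sum(rewards)
    return [r - (total - r) / (n - 1) for r in rewards]

# Two hypothetical solutions to the same problem (5 tests total):
#   solution A passes tests {1, 2}  -> pass rate 0.4 (uniquely passes test 2)
#   solution B passes tests {1, 3, 4} -> pass rate 0.6 (fails test 2)
advantages = rloo_advantages([0.4, 0.6])
print(advantages)  # A gets a negative advantage, B a positive one

# Even though A is the only sample that handles test 2, the update
# down-weights A wholesale: partial-pass solutions in the same group
# pull the policy in conflicting directions, and neither is fully correct.
```

This mirrors the paper's claim that pass rate is a miscalibrated surrogate: the gradient rewards relative partial progress within a group rather than movement toward full-pass solutions.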