RefineRL: Advancing Competitive Programming with Self-Refinement Reinforcement Learning

arXiv cs.AI / 4/2/2026


Key Points

  • The paper introduces RefineRL, aiming to improve LLM performance in competitive programming by leveraging iterative self-refinement rather than single-attempt solution generation.
  • RefineRL’s Skeptical-Agent uses local execution/validation against public test cases while maintaining a skeptical stance toward its own outputs to drive more rigorous refinement.
  • It also proposes an RL-based training method that encourages self-refinement using only standard RLVR data (problems with verifiable answers), avoiding the need for specialized extra supervision.
  • Experiments on Qwen3-4B and Qwen3-4B-2507 show that RL-trained 4B models with the Skeptical-Agent outperform much larger 32B models and come close to the single-attempt performance of 235B models, indicating strong scaling potential for refinement-based reasoning.

Abstract

While large language models (LLMs) have demonstrated strong performance on complex reasoning tasks such as competitive programming (CP), existing methods predominantly focus on single-attempt settings, overlooking their capacity for iterative refinement. This paper presents RefineRL, a novel approach designed to unlock the self-refinement capabilities of LLMs for CP problem solving. RefineRL introduces two key innovations: (1) Skeptical-Agent, an iterative self-refinement agent equipped with local execution tools that validate generated solutions against the public test cases of CP problems. The agent maintains a skeptical attitude toward its own outputs, enforcing rigorous self-refinement even when validation suggests correctness. (2) A reinforcement learning (RL) recipe that incentivizes LLMs to self-refine using only standard RLVR data (i.e., problems paired with verifiable answers), without requiring specialized extra supervision. Extensive experiments on Qwen3-4B and Qwen3-4B-2507 demonstrate substantial gains: after RL training, these compact 4B models, integrated with the Skeptical-Agent, not only outperform much larger 32B models but also approach the single-attempt performance of 235B models. These findings suggest that self-refinement holds considerable promise for scaling LLM reasoning, with significant potential for further advancement.
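To make the validate-then-refine loop concrete, here is a minimal sketch of the kind of agent the abstract describes: candidate solutions are executed against public test cases, and even a passing solution is re-checked before being accepted. This is an illustrative toy, not the paper's implementation; the function names (`run_public_tests`, `skeptical_refine`), the re-validation step standing in for the agent's skeptical review, and the hard-coded "drafts" standing in for LLM generations are all assumptions.

```python
from typing import List, Tuple

def run_public_tests(solution_src: str, tests: List[Tuple[tuple, object]]) -> Tuple[bool, str]:
    """Execute a candidate solution against public (input, expected) pairs."""
    namespace: dict = {}
    try:
        exec(solution_src, namespace)          # compile the candidate code
        solve = namespace["solve"]
        for args, expected in tests:
            got = solve(*args)
            if got != expected:
                return False, f"solve{args} returned {got!r}, expected {expected!r}"
        return True, "all public tests passed"
    except Exception as exc:
        return False, f"runtime error: {exc}"

def skeptical_refine(drafts: List[str], tests: List[Tuple[tuple, object]],
                     extra_checks: int = 1) -> str:
    """Iterate over successive 'drafts' (stand-ins for LLM generations).

    A draft that passes is still re-validated `extra_checks` more times,
    mimicking the agent's skeptical stance toward its own outputs; a
    failing draft's feedback would, in the real agent, be fed back to
    the model to produce the next refinement.
    """
    accepted = drafts[-1]                      # fallback: last refinement
    for draft in drafts:
        passed, feedback = run_public_tests(draft, tests)
        if passed and all(run_public_tests(draft, tests)[0]
                          for _ in range(extra_checks)):
            return draft
    return accepted

# Toy CP problem: return the sum of two integers.
public_tests = [((1, 2), 3), ((-1, 1), 0)]
drafts = [
    "def solve(a, b):\n    return a - b\n",    # buggy first attempt
    "def solve(a, b):\n    return a + b\n",    # refined attempt
]
best = skeptical_refine(drafts, public_tests)
```

In this sketch the buggy first draft fails the public tests, so the loop moves on to the refined draft, which passes both the initial validation and the skeptical re-check.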