Enhanced LLM Reasoning by Optimizing Reward Functions with Search-Driven Reinforcement Learning

arXiv cs.CL / 5/5/2026


Key Points

  • The paper proposes a search-driven reinforcement learning framework that optimizes not just an LLM’s policy, but the reward function specification itself to improve mathematical reasoning performance.
  • Using a fixed base model (Llama-3.2-3B-Instruct) with LoRA, the method generates candidate reward functions via a frontier language model, validates them automatically, and screens them through 500-step GRPO training runs, ranking each candidate by F1 on the GSM8K test set (see the sketch after this list).
  • Over five iterative rounds, it produces 50 candidate rewards and improves mean GSM8K F1 from 0.596 (Round 1) to 0.632 (Round 5), with the best single reward reaching F1 = 0.787.
  • Evaluating ensembles of top-ranked rewards shows that the best ensemble achieves F1 = 0.795 and accuracy 0.660, a +0.19 absolute F1 gain over a GRPO baseline that uses base rewards only.
  • Control experiments and statistical testing (McNemar with Bonferroni correction) indicate the performance gains come from the ranked-feedback loop rather than merely adding more reward signals.
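The outer search loop described in the key points is straightforward to sketch. The Python skeleton below is illustrative only, not the authors' code: `propose_rewards` and `screen` are hypothetical stand-ins for prompting a frontier model and for the 500-step GRPO + LoRA screening run, respectively. Only the round structure (five rounds of ten candidates, with a ranked summary carried forward) mirrors the paper's description.

```python
import random
from dataclasses import dataclass


@dataclass
class Candidate:
    spec: str        # reward-function specification (e.g. source code or prompt)
    f1: float = 0.0  # GSM8K F1 after the short GRPO screening run


def propose_rewards(feedback: str, n: int) -> list[Candidate]:
    """Hypothetical stand-in for prompting a frontier LLM with ranked
    summaries from prior rounds; here it just fabricates placeholder specs."""
    return [Candidate(spec=f"reward_{random.randrange(10**6)}") for _ in range(n)]


def screen(candidate: Candidate) -> float:
    """Hypothetical stand-in for a 500-step GRPO run on Llama-3.2-3B-Instruct
    with LoRA, evaluated by F1 on GSM8K; here it returns a random score."""
    return random.random()


def search(rounds: int = 5, per_round: int = 10) -> list[Candidate]:
    feedback = ""                       # ranked summary fed back each round
    archive: list[Candidate] = []
    for r in range(1, rounds + 1):
        candidates = propose_rewards(feedback, per_round)
        for c in candidates:
            c.f1 = screen(c)            # validate + short GRPO screening run
        archive.extend(candidates)
        ranked = sorted(archive, key=lambda c: c.f1, reverse=True)
        feedback = "\n".join(f"{c.spec}: F1={c.f1:.3f}" for c in ranked[:10])
        print(f"Round {r}: best so far F1={ranked[0].f1:.3f}")
    return sorted(archive, key=lambda c: c.f1, reverse=True)


if __name__ == "__main__":
    top = search()
    print("Top-5 candidates for ensembling:", [c.spec for c in top[:5]])
```

In the paper, the screening metric is F1 on the GSM8K test set and the top-ranked specifications are subsequently combined into ensembles; the skeleton above reproduces only the generate-screen-rank-feedback structure.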

Abstract

Mathematical reasoning is a key benchmark for large language models. Reinforcement learning is a standard post-training mechanism for improving the reasoning capabilities of large language models, yet performance remains sensitive to the design of the reward function that drives policy optimization. This paper introduces a search-driven framework that treats the reward specification itself as an object of optimization. The setting of interest is one in which the base model is held fixed and the reward specification is the primary remaining design lever. Candidate reward functions are generated by a frontier language model, validated automatically, screened through 500-step Group Relative Policy Optimization (GRPO) training runs on a Llama-3.2-3B-Instruct base model with Low-Rank Adaptation (LoRA), and ranked by F1 on the GSM8K test set. Ranked summaries from prior rounds are then fed back into the next round of generation. Over five rounds, the search produces 50 candidate rewards. The mean F1 rises from 0.596 in Round 1 to 0.632 in Round 5, and the top individual reward reaches F1 = 0.787. Seven ensemble configurations of top-ranked rewards are evaluated. The best ensemble achieves F1 = 0.795 (95% bootstrap CI [0.756, 0.832]) and accuracy 0.660 [0.635, 0.686], a 0.19 absolute F1 gain over a base-rewards-only GRPO baseline (F1 = 0.609). Pairwise McNemar tests with Bonferroni correction show that all configurations with five or more rewards are statistically indistinguishable at α = 0.05/21. A three-seed re-training of the best ensemble yields F1 = 0.785. A randomly drawn 5-reward control collapses to F1 = 0.047, showing that the ranked-feedback loop, rather than the additive signal of simply having more rewards, drives the gain.
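The significance threshold quoted in the abstract, α = 0.05/21, comes from Bonferroni-correcting the 21 pairwise comparisons among seven ensemble configurations. The sketch below shows how such a comparison could be run with statsmodels' exact McNemar test, assuming per-example correctness is available for each system; the function name, data layout, and synthetic demo data are illustrative assumptions, not taken from the paper.

```python
from itertools import combinations

import numpy as np
from statsmodels.stats.contingency_tables import mcnemar


def pairwise_mcnemar(preds: dict[str, np.ndarray], labels: np.ndarray,
                     alpha: float = 0.05) -> None:
    """Pairwise exact McNemar tests between systems, using a
    Bonferroni-corrected significance threshold (illustrative sketch)."""
    names = list(preds)
    pairs = list(combinations(names, 2))
    threshold = alpha / len(pairs)      # e.g. 0.05 / 21 for 7 systems
    for a, b in pairs:
        correct_a = preds[a] == labels
        correct_b = preds[b] == labels
        # 2x2 contingency table of correctness agreement/disagreement
        table = [
            [np.sum(correct_a & correct_b), np.sum(correct_a & ~correct_b)],
            [np.sum(~correct_a & correct_b), np.sum(~correct_a & ~correct_b)],
        ]
        p = mcnemar(table, exact=True).pvalue
        verdict = "significant" if p < threshold else "indistinguishable"
        print(f"{a} vs {b}: p={p:.4f} ({verdict} at alpha={threshold:.4f})")


if __name__ == "__main__":
    # Synthetic demo data only; real usage would pass GSM8K correctness vectors.
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=200)
    systems = {f"ens{k}": rng.integers(0, 2, size=200) for k in range(7)}
    pairwise_mcnemar(systems, labels)
```

With seven systems, `len(pairs)` is 21, which yields exactly the 0.05/21 threshold the abstract reports; the McNemar test only uses the off-diagonal discordant counts, i.e. examples one system gets right and the other gets wrong.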