RHyVE: Competence-Aware Verification and Phase-Aware Deployment for LLM-Generated Reward Hypotheses

arXiv cs.AI / 5/1/2026

Key Points

  • The paper argues that LLM-generated reward functions for reinforcement learning can’t be treated as reliable optimization objectives without considering when they can be verified and deployed during training.
  • It proposes RHyVE, a competence-aware verification and phase-aware deployment protocol that treats generated rewards as hypotheses and uses short-horizon fork verification keyed to the current policy’s competence (see the sketch after this list).
  • Experiments show reward rankings are unreliable when policy competence is low but become useful after task-dependent competence thresholds are reached.
  • On a sparse manipulation task, phase-aware deployment under a locked protocol improves both peak and retained performance compared with alternatives.
  • Additional experiments indicate there is no universally optimal warm-up schedule, and RHyVE is best seen as a verification-informed deployment approach rather than a one-size-fits-all scheduler.
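
To make the fork-verification step concrete, the sketch below shows one plausible reading of it: each candidate reward gets a short training fork from the same policy checkpoint, and the forks are then compared on a shared task metric. It is a minimal sketch under assumptions; the names (`short_train`, `task_metric`, etc.) are illustrative placeholders, not the paper's API.

```python
import copy
from typing import Any, Callable, Dict, List, Tuple

def fork_verify(
    checkpoint: Any,
    candidates: Dict[str, Callable],
    short_train: Callable[[Any, Callable], Any],
    task_metric: Callable[[Any], float],
) -> List[Tuple[str, float]]:
    """Rank candidate reward functions via short-horizon fork verification.

    Each candidate is trained briefly on its own fork of the shared
    checkpoint, then all forks are scored with the same task metric.
    This is a hypothetical sketch, not the paper's implementation.
    """
    scores = []
    for name, reward_fn in candidates.items():
        fork = copy.deepcopy(checkpoint)     # every candidate starts from the same checkpoint
        fork = short_train(fork, reward_fn)  # cheap, short-horizon training burst
        scores.append((name, task_metric(fork)))
    # Per the paper, rankings like this are only informative once the
    # checkpoint policy has crossed a task-dependent competence threshold.
    return sorted(scores, key=lambda kv: kv[1], reverse=True)
```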

Abstract

Large language models (LLMs) make reward design in reinforcement learning substantially more scalable, but generated rewards are not automatically reliable training objectives. Existing work has focused primarily on generating, evolving, or selecting reward candidates, while paying less attention to when such candidates can be verified and deployed during policy optimization. We study this deployment-time problem by treating generated rewards as reward hypotheses whose utility depends on the competence of the current policy and the phase of training. We propose RHyVE, a competence-aware verification and phase-aware deployment protocol that compares small sets of reward hypotheses from shared policy checkpoints using short-horizon fork verification. Our experiments show that reward rankings are unreliable at low competence but become informative after task-dependent thresholds. On a sparse manipulation task, phase-aware deployment improves peak and retained performance under a locked protocol. Updated LLM-generated reward-candidate experiments show candidate-family-dependent behavior: generated pools can exhibit phase-dependent winner changes, but no fixed warm-up schedule is universally optimal. Held-out schedule selection, conservative selector baselines, compute-matched controls, and scale controls further show that RHyVE is best understood as a verification-informed deployment protocol rather than a universal scheduler. Dense and all-failure boundary experiments delimit the scope of the method. Together, these results suggest that reward generation and reward deployment should be studied as coupled problems: generated rewards must be verified and deployed under changing policy competence.
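
Read this way, phase-aware deployment amounts to a gating rule on when a verified reward is allowed to replace whatever the policy is currently trained on. The snippet below is a minimal sketch of such a gate, assuming a scalar competence estimate and a task-dependent threshold; both the threshold value and the function names are assumptions, not details from the paper.

```python
from typing import Callable

def deploy_reward(competence: float,
                  warmup_reward: Callable,
                  verified_reward: Callable,
                  threshold: float = 0.3) -> Callable:
    """Illustrative phase-aware deployment gate (not the paper's exact rule).

    Below the competence threshold, verification rankings are treated as
    unreliable and a conservative warm-up reward stays in place; above it,
    the fork-verified winner is deployed. The threshold here is only a
    placeholder; the paper reports that it is task-dependent and that no
    fixed warm-up schedule is universally optimal.
    """
    return warmup_reward if competence < threshold else verified_reward
```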