ReCast: Recasting Learning Signals for Reinforcement Learning in Generative Recommendation

arXiv cs.AI / 4/27/2026


Key Points

  • The paper shows that generic group-based reinforcement learning assumptions fail for sparse-hit generative recommendation because many sampled rollout groups never become usable learning signals.
  • It introduces ReCast, which repairs rollout groups to restore minimal learnability even when a group contains no hits (all-zero), and then applies a boundary-focused contrastive update instead of full-group reward normalization.
  • ReCast is designed to keep the outer RL framework unchanged by modifying only the within-group learning-signal construction, aiming to improve efficiency while preserving the overall training pipeline.
  • Across multiple generative recommendation tasks, ReCast outperforms OpenOneRec-RL with up to a 36.6% relative Pass@1 improvement, and reaches the baseline's target performance using only 4.1% of the rollout budget.
  • The method also delivers system-level efficiency gains, reducing actor-side update time by 16.60x and peak memory usage by 16.5% while improving actor MFU by 14.2%, alongside mechanistic evidence that it alleviates all-zero/single-hit regimes.
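The repair-then-contrast idea in the points above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the summary does not specify how a repaired positive is chosen or how boundary rollouts are ranked, so this sketch assumes per-rollout hit rewards plus model scores, promotes the highest-scoring rollout of an all-zero group to a pseudo-positive, and assigns nonzero advantages only to the strongest positive and the hardest negative (in contrast to GRPO-style full-group normalization).

```python
import numpy as np

def grpo_advantages(rewards):
    """Baseline: GRPO-style full-group reward normalization."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

def recast_signal(rewards, scores):
    """Hypothetical sketch of a repair-then-contrast signal.

    rewards: per-rollout hit rewards (sparse; often all zero)
    scores:  per-rollout model scores, used here only to rank
             rollouts within the group (an assumption of this sketch).
    """
    r = np.asarray(rewards, dtype=float)
    s = np.asarray(scores, dtype=float)
    adv = np.zeros_like(r)

    # Repair: an all-zero group yields no gradient under group
    # normalization; relabel one rollout as a pseudo-positive so the
    # group becomes minimally learnable (selection rule is assumed).
    if r.max() == 0:
        r = r.copy()
        r[np.argmax(s)] = 1.0

    pos = np.where(r > 0)[0]
    neg = np.where(r == 0)[0]

    # Contrast: update only the boundary pair -- the strongest positive
    # and the hardest (highest-scoring) negative -- leaving the rest of
    # the group out of the actor-side update.
    strongest_pos = pos[np.argmax(s[pos])]
    adv[strongest_pos] = 1.0
    if len(neg) > 0:
        hardest_neg = neg[np.argmax(s[neg])]
        adv[hardest_neg] = -1.0
    return adv
```

Because only the boundary pair receives a nonzero advantage, the actor-side update width no longer grows with the rollout search width, which is consistent with the efficiency gains the summary reports.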

Abstract

Generic group-based RL assumes that sampled rollout groups are already usable learning signals. We show that this assumption breaks down in sparse-hit generative recommendation, where many sampled groups never become learnable at all. We propose ReCast, a repair-then-contrast learning-signal framework that first restores minimal learnability for all-zero groups and then replaces full-group reward normalization with a boundary-focused contrastive update on the strongest positive and the hardest negative. ReCast leaves the outer RL framework unchanged, modifies only within-group signal construction, and partially decouples rollout search width from actor-side update width. Across multiple generative recommendation tasks, ReCast consistently outperforms OpenOneRec-RL, achieving up to 36.6% relative improvement in Pass@1. Its matched-budget advantage is substantially larger: ReCast reaches the baseline's target performance with only 4.1% of the rollout budget, and this advantage widens with model scale. The same design also yields direct system-level gains, reducing actor-side update time by 16.60x, lowering peak allocated memory by 16.5%, and improving actor MFU by 14.2%. Mechanism analysis shows that ReCast mitigates the persistent all-zero / single-hit regime, restores learnability when natural positives are scarce, and converts otherwise wasted rollout budget into more stable policy updates. These results suggest that, for generative recommendation, the decisive RL problem is not only how to assign rewards, but how to construct learnable optimization events from sparse, structured supervision.