AI Navigate

Robust Post-Training for Generative Recommenders: Why Exponential Reward-Weighted SFT Outperforms RLHF

arXiv cs.LG / 3/12/2026


Key Points

  • The paper proposes exponential reward-weighted supervised fine-tuning (SFT) with weights w = exp(r/λ) for post-training of generative recommender systems, enabling offline optimization on observed rewards.
  • It argues this approach avoids reward hacking, requires no propensity scores, and is suitable for production-scale systems where online interaction is impractical.
  • The authors provide theoretical guarantees: policy improvement with a gap that scales only logarithmically with catalog size, and a tunable temperature λ that controls the robustness-improvement trade-off.
  • Empirical results on three open-source datasets and one proprietary dataset show that the method consistently outperforms RLHF baselines, demonstrating scalability and effectiveness.
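The core objective is simple enough to sketch directly. The following is a minimal, illustrative Python implementation of exponential reward weighting applied to per-example supervised losses; the function names and the max-shift stabilization are my own choices, not the paper's code, and the per-batch normalization is one common convention rather than a detail taken from the source.

```python
import math

def exp_reward_weights(rewards, lam):
    """Exponential reward weights w_i = exp(r_i / lambda).

    Rewards are shifted by their maximum before exponentiation for
    numerical stability; this rescales every weight by the same
    constant, which cancels once the loss is normalized per batch.
    """
    m = max(rewards)
    return [math.exp((r - m) / lam) for r in rewards]

def weighted_sft_loss(nll_per_example, rewards, lam):
    """Offline reward-weighted SFT objective (sketch):
    sum_i w_i * NLL_i / sum_i w_i, with w_i = exp(r_i / lambda).
    Uses only observed rewards -- no learned reward model is queried.
    """
    w = exp_reward_weights(rewards, lam)
    z = sum(w)
    return sum(wi * li for wi, li in zip(w, nll_per_example)) / z
```

This makes the role of λ concrete: a large λ flattens the weights toward uniform, recovering plain SFT on the logged data, while a small λ concentrates the loss on the highest-reward interactions, trading robustness to reward noise for a larger expected improvement.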

Abstract

Aligning generative recommender systems to user preferences via post-training is critical for closing the gap between next-item prediction and actual recommendation quality. Existing post-training methods are ill-suited for production-scale systems: RLHF methods are prone to reward hacking due to noisy user feedback and unreliable reward models, offline RL alternatives require propensity scores that are unavailable, and online interaction is infeasible. We identify exponential reward-weighted SFT with weights w = exp(r/λ) as uniquely suited to this setting, and provide the theoretical and empirical foundations that explain why. By optimizing directly on observed rewards without querying a learned reward model, the method is immune to reward hacking, requires no propensity scores, and is fully offline. We prove the first policy improvement guarantees for this setting under noisy rewards, showing that the gap scales only logarithmically with catalog size and remains informative even for large item catalogs. Crucially, we show that the temperature λ explicitly and quantifiably controls the robustness-improvement trade-off, giving practitioners a single interpretable regularization hyperparameter with theoretical grounding. Experiments on three open-source datasets and one proprietary dataset against four baselines confirm that exponential reward weighting is simple, scalable, and consistently outperforms RLHF-based alternatives.