Reward Weighted Classifier-Free Guidance as Policy Improvement in Autoregressive Models

arXiv cs.AI · April 20, 2026


Key Points

  • The paper studies autoregressive models whose outputs are summarized by attribute vectors, with an arbitrary reward function r(y) encoding tradeoffs among those attributes.
  • It proposes Reward Weighted Classifier-Free Guidance (RCFG) as a policy improvement operator that approximates exponentially tilting the sampling distribution by the Q function.
  • Unlike reinforcement learning retraining, RCFG can optimize for new reward functions at test time, enabling re-alignment without full re-training.
  • Experiments on molecular generation show RCFG can handle novel reward functions, and distilling RCFG (used as a teacher) into the base policy as a warm start significantly speeds up convergence of standard RL.

Abstract

Consider an auto-regressive model that produces outputs x (e.g., answers to questions, molecules) each of which can be summarized by an attribute vector y (e.g., helpfulness vs. harmlessness, or bio-availability vs. lipophilicity). An arbitrary reward function r(y) encodes tradeoffs between these properties. Typically, tilting the model's sampling distribution to increase this reward is done at training time via reinforcement learning. However, if the reward function changes, re-alignment requires re-training. In this paper, we show that a reward weighted classifier-free guidance (RCFG) can act as a policy improvement operator in this setting, approximating tilting the sampling distribution by the Q function. We apply RCFG to molecular generation, demonstrating that it can optimize novel reward functions at test time. Finally, we show that using RCFG as a teacher and distilling into the base policy to serve as a warm start significantly speeds up convergence for standard RL.
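The abstract's central operation, tilting the model's sampling distribution toward higher reward, can be sketched at the level of a single decoding step. This is a minimal illustration, not the paper's implementation: it assumes a hypothetical per-token Q estimate `q_values` and a tilting strength `beta`, and forms the next-token distribution p(x) ∝ p_base(x) · exp(beta · Q(x)).

```python
import numpy as np

def tilted_next_token_probs(base_logits, q_values, beta=1.0):
    """Tilt a base policy's next-token distribution by exp(beta * Q).

    base_logits : (V,) unnormalized log-probs from the base policy
    q_values    : (V,) hypothetical Q-value estimates, one per candidate token
    beta        : tilting strength; beta = 0 recovers the base policy,
                  larger beta shifts mass toward high-Q tokens
    """
    logits = np.asarray(base_logits) + beta * np.asarray(q_values)
    logits = logits - logits.max()   # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()
```

Because the tilt is applied at sampling time, swapping in a different reward (and hence a different Q estimate) requires no retraining, which is the test-time re-alignment property the paper emphasizes.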