AR-CoPO: Align Autoregressive Video Generation with Contrastive Policy Optimization
arXiv cs.CV / 3/19/2026
Key Points
- AR-CoPO is a framework that aligns streaming autoregressive (AR) video generation using contrastive policy optimization, addressing the difficulty of applying RLHF to AR video synthesis.
- It uses a chunk-level alignment forking mechanism that constructs neighborhood candidates at a randomly selected chunk, assigns sequence-level rewards, and performs localized GRPO updates.
- The approach includes a semi-on-policy training strategy that blends on-policy exploration with exploitation from a replay buffer of reference rollouts to improve generation quality.
- Experiments on Self-Forcing show improved out-of-domain generalization and in-domain human preference alignment over the baseline, providing evidence of genuine alignment rather than reward hacking.
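The two core mechanisms above — group-relative rewards over candidate rollouts forked at a chunk, and mixing on-policy exploration with replayed reference rollouts — can be illustrated with a minimal sketch. This is not the paper's implementation; the function names, the standard GRPO-style advantage normalization, and the probabilistic replay mix are assumptions for illustration only.

```python
import random

def grpo_advantages(rewards):
    """GRPO-style group-relative advantages: normalize each candidate's
    sequence-level reward against the group's mean and std (illustrative)."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    if std == 0:
        # All candidates tied: no preference signal in this group.
        return [0.0 for _ in rewards]
    return [(r - mean) / std for r in rewards]

def sample_rollout(policy_fn, replay_buffer, on_policy_prob, rng):
    """Semi-on-policy sampling (assumed mechanism): with probability
    on_policy_prob explore with the current policy; otherwise exploit
    a stored reference rollout from the replay buffer."""
    if replay_buffer and rng.random() >= on_policy_prob:
        return rng.choice(replay_buffer), "replay"
    return policy_fn(), "on_policy"

def fork_chunk_index(num_chunks, rng):
    """Pick a random chunk at which to fork neighborhood candidates."""
    return rng.randrange(num_chunks)
```

In this reading, the policy update would then be applied locally at the forked chunk, weighted by the group-relative advantages, rather than backpropagating through the full generated sequence.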
