AR-CoPO: Align Autoregressive Video Generation with Contrastive Policy Optimization
arXiv cs.CV / 3/19/2026
Key Points
- AR-CoPO introduces a framework to align streaming autoregressive video generation with contrastive policy optimization, addressing alignment challenges under RLHF in AR video synthesis.
- It uses a chunk-level alignment forking mechanism that constructs neighborhood candidates at a randomly selected chunk, assigns sequence-level rewards, and performs localized GRPO updates.
- The approach includes a semi-on-policy training strategy that blends on-policy exploration with exploitation from a replay buffer of reference rollouts to improve generation quality.
- Experiments built on the Self-Forcing baseline show improved out-of-domain generalization and stronger in-domain human preference alignment, providing evidence of genuine alignment rather than reward hacking.
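The chunk-level forking and group-relative update described above can be sketched as follows. This is an illustrative outline under stated assumptions, not the paper's implementation: `fork_fn`, `reward_fn`, and the chunk representation are hypothetical stand-ins, and the advantage normalization follows the standard GRPO recipe (reward centered by the group mean and scaled by the group standard deviation).

```python
import random

def grpo_advantages(rewards):
    # Standard group-relative advantage: center by the group mean,
    # scale by the group standard deviation (guard against std == 0).
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std if std > 0 else 1.0) for r in rewards]

def forked_chunk_update(video_chunks, fork_fn, reward_fn, num_candidates, rng):
    # Hypothetical sketch of chunk-level alignment forking:
    # 1. Pick a fork point: a randomly selected chunk in the streaming rollout.
    fork_idx = rng.randrange(len(video_chunks))
    prefix = video_chunks[:fork_idx]
    # 2. Construct neighborhood candidates by re-sampling at the forked chunk.
    candidates = [prefix + [fork_fn(prefix)] for _ in range(num_candidates)]
    # 3. Assign each candidate a sequence-level reward.
    rewards = [reward_fn(c) for c in candidates]
    # 4. Localized GRPO: the advantages drive a policy update
    #    only at the forked chunk, not over the whole sequence.
    return fork_idx, grpo_advantages(rewards)
```

A semi-on-policy loop in the spirit of the third bullet could then, at each step, either call the live policy for a fresh rollout (exploration) or draw a reference rollout from a replay buffer (exploitation), feeding whichever it gets into `forked_chunk_update`.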