MAR-GRPO: Stabilized GRPO for AR-diffusion Hybrid Image Generation
arXiv cs.CV / 4/9/2026
Key Points
- The paper studies why applying reinforcement learning to hybrid autoregressive–diffusion (AR-diffusion) image generation is unstable, attributing the instability to noisy log-probability estimates produced by the stochastic diffusion component during interleaved inference.
- It proposes MAR-GRPO, a stabilized RL training framework for masked autoregressive models that uses multi-trajectory expectation (MTE) to average over multiple diffusion trajectories and reduce gradient noise.
- To prevent over-smoothing, it estimates token-wise uncertainty from multiple trajectories and applies multi-trajectory optimization only to the top-k% most uncertain tokens.
- It further introduces a consistency-aware token selection strategy to filter AR tokens that are poorly aligned with the final generated content.
- Experiments across multiple benchmarks show improvements in visual quality, training stability, and spatial structure understanding versus GRPO and pre-RL baselines, with code released on GitHub.
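The mechanics described in the key points can be illustrated with a small numerical sketch: average per-token log-probabilities over several diffusion trajectories (the MTE step), use the cross-trajectory variance as a token-wise uncertainty signal, and apply the multi-trajectory average only to the top-k% most uncertain tokens. This is an assumed, simplified reading of the method; the function name, shapes, and the use of variance as the uncertainty measure are illustrative, not the paper's exact formulation.

```python
import numpy as np

def mte_logprobs(trajectory_logprobs, top_k_frac=0.2):
    """Sketch of multi-trajectory expectation (MTE) with uncertainty-gated
    top-k% token selection (illustrative assumption, not the paper's code).

    trajectory_logprobs: (M, T) array of per-token log-probabilities from
    M independent diffusion trajectories over T AR tokens.
    """
    lp = np.asarray(trajectory_logprobs, dtype=float)
    mean_lp = lp.mean(axis=0)            # MTE: average over trajectories
    uncertainty = lp.var(axis=0)         # token-wise trajectory disagreement
    k = max(1, int(round(top_k_frac * lp.shape[1])))
    uncertain = np.argsort(-uncertainty)[:k]   # top-k% most uncertain tokens
    out = lp[0].copy()                   # confident tokens: single trajectory
    out[uncertain] = mean_lp[uncertain]  # uncertain tokens: averaged estimate
    return out, uncertain
```

Restricting the averaging to high-variance tokens is what prevents the over-smoothing the paper warns about: tokens on which all trajectories already agree keep their single-trajectory estimate.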