MAR-GRPO: Stabilized GRPO for AR-diffusion Hybrid Image Generation

arXiv cs.CV / 4/9/2026

Key Points

  • The paper studies why applying reinforcement learning to hybrid autoregressive–diffusion (AR-diffusion) image generation is unstable, focusing on noisy log-probability gradients caused by the diffusion component during interleaved inference.
  • It proposes MAR-GRPO, a stabilized RL training framework for masked autoregressive models that uses multi-trajectory expectation (MTE) to average over multiple diffusion trajectories and reduce gradient noise.
  • To prevent over-smoothing, it estimates token-wise uncertainty from multiple trajectories and applies multi-trajectory optimization only to the top-k% most uncertain tokens.
  • It further introduces a consistency-aware token selection strategy to filter AR tokens that are poorly aligned with the final generated content.
  • Experiments across multiple benchmarks show improvements in visual quality, training stability, and spatial structure understanding versus GRPO and pre-RL baselines, with code released on GitHub.
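The multi-trajectory expectation and uncertainty gating described in the key points can be sketched as follows. This is an illustrative reading, not the paper's implementation: the array shapes, the use of trajectory variance as the token-wise uncertainty score, and the fallback to a single-trajectory estimate are all assumptions.

```python
import numpy as np

def mte_logprob(logps, k_pct=0.3):
    """Multi-trajectory expectation (MTE) sketch: average per-token
    log-probs over T sampled diffusion trajectories, then apply the
    averaged estimate only to the top-k% most uncertain tokens
    (uncertainty here approximated by variance across trajectories).

    logps: (T, N) array -- log-prob of each of N tokens under T trajectories.
    Returns an (N,) array of stabilized per-token log-prob estimates.
    """
    mean = logps.mean(axis=0)                # MTE: average over trajectories
    var = logps.var(axis=0)                  # token-wise uncertainty estimate
    n_uncertain = max(1, int(k_pct * logps.shape[1]))
    topk = np.argsort(var)[-n_uncertain:]    # most uncertain token indices
    out = logps[0].copy()                    # default: single-trajectory estimate
    out[topk] = mean[topk]                   # apply MTE only where noise is largest
    return out
```

Restricting the averaged estimate to the most uncertain tokens is what the paper frames as avoiding over-smoothing: confident tokens keep their sharper single-trajectory signal.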

Abstract

Reinforcement learning (RL) has been successfully applied to autoregressive (AR) and diffusion models. However, extending RL to hybrid AR-diffusion frameworks remains challenging due to interleaved inference and noisy log-probability estimation. In this work, we study masked autoregressive models (MAR) and show that the diffusion head plays a critical role in training dynamics, often introducing noisy gradients that lead to instability and early performance saturation. To address this issue, we propose a stabilized RL framework for MAR. We introduce multi-trajectory expectation (MTE), which estimates the optimization direction by averaging over multiple diffusion trajectories, thereby reducing diffusion-induced gradient noise. To avoid over-smoothing, we further estimate token-wise uncertainty from multiple trajectories and apply multi-trajectory optimization only to the top-k% uncertain tokens. In addition, we introduce a consistency-aware token selection strategy that filters out AR tokens that are less aligned with the final generated content. Extensive experiments across multiple benchmarks demonstrate that our method consistently improves visual quality, training stability, and spatial structure understanding over baseline GRPO and pre-RL models. Code is available at: https://github.com/AMAP-ML/mar-grpo.
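One plausible reading of the consistency-aware token selection strategy is to score each AR token against the final generated content and keep only the most consistent fraction for the RL update. The cosine-similarity scoring, the embedding inputs, and the `keep_frac` parameter below are assumptions for illustration, not details from the paper.

```python
import numpy as np

def select_consistent_tokens(step_embs, final_embs, keep_frac=0.8):
    """Consistency-aware token selection sketch: score each AR token by
    cosine similarity between its intermediate-step embedding and the
    corresponding embedding of the final generated content, then keep
    the most consistent fraction of tokens.

    step_embs, final_embs: (N, D) arrays of per-token embeddings.
    Returns indices of the kept (most consistent) tokens.
    """
    a = step_embs / np.linalg.norm(step_embs, axis=1, keepdims=True)
    b = final_embs / np.linalg.norm(final_embs, axis=1, keepdims=True)
    scores = (a * b).sum(axis=1)             # per-token cosine consistency
    n_keep = max(1, int(keep_frac * len(scores)))
    return np.argsort(scores)[-n_keep:]      # indices of most consistent tokens
```

Filtering out tokens whose intermediate predictions diverge from the final image keeps the policy-gradient signal focused on tokens that actually shaped the output.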