AI Navigate

Reinforcement Learning for Diffusion LLMs with Entropy-Guided Step Selection and Stepwise Advantages

arXiv cs.AI / 3/16/2026

💬 Opinion · Models & Research

Key Points

  • The paper reframes diffusion-based sequence generation as a finite-horizon Markov decision process over the denoising trajectory and derives an exact, unbiased policy gradient that decomposes over steps via intermediate advantages without needing sequence-level likelihoods.
  • It introduces an entropy-guided approximation bound to selectively update the policy on denoising steps, improving computational efficiency.
  • It estimates intermediate advantages using a one-step denoising reward from the diffusion model to avoid costly multi-step rollouts.
  • Empirical results on coding and logical reasoning benchmarks show state-of-the-art performance, with strongly competitive results on mathematical reasoning, outperforming existing RL post-training methods for diffusion LLMs.
  • The authors release the code at https://github.com/vishnutez/egspo-dllm-rl.
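The entropy-guided selection idea can be illustrated with a small sketch. All names, shapes, and the top-k selection rule below are assumptions for illustration, not the paper's actual approximation bound: the idea is to score each denoising step by the entropy of its denoising distribution and restrict policy updates to the highest-entropy steps.

```python
import numpy as np

def step_entropies(step_probs):
    """Mean token-level entropy of the denoising distribution at each step.

    step_probs: array of shape (T, L, V) -- T denoising steps, L token
    positions, V vocabulary size (a toy stand-in for a DLM's outputs).
    """
    p = np.clip(step_probs, 1e-12, 1.0)
    ent = -(p * np.log(p)).sum(axis=-1)   # (T, L) per-token entropies
    return ent.mean(axis=-1)              # (T,) per-step averages

def select_steps(step_probs, k):
    """Pick the k denoising steps with the highest entropy for policy updates."""
    ents = step_entropies(step_probs)
    return np.argsort(ents)[::-1][:k]

# Toy example: 8 denoising steps, 16 token positions, vocabulary of 32.
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 16, 32))
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
chosen = select_steps(probs, k=3)
```

Updating only on the selected steps is what gives the claimed compute savings: the remaining low-entropy steps are nearly deterministic and contribute little gradient signal.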

Abstract

Reinforcement learning (RL) has been effective for post-training autoregressive (AR) language models, but extending these methods to diffusion language models (DLMs) is challenging due to intractable sequence-level likelihoods. Existing approaches therefore rely on surrogate likelihoods or heuristic approximations, which can introduce bias and obscure the sequential structure of denoising. We formulate diffusion-based sequence generation as a finite-horizon Markov decision process over the denoising trajectory and derive an exact, unbiased policy gradient that decomposes over denoising steps and is expressed in terms of intermediate advantages, without requiring explicit evaluation of the sequence likelihood. To obtain a practical and compute-efficient estimator, we (i) select denoising steps for policy updates via an entropy-guided approximation bound, and (ii) estimate intermediate advantages using a one-step denoising reward naturally provided by the diffusion model, avoiding costly multi-step rollouts. Experiments on coding and logical reasoning benchmarks demonstrate state-of-the-art results, with strong competitive performance on mathematical reasoning, outperforming existing RL post-training approaches for DLMs. Code is available at https://github.com/vishnutez/egspo-dllm-rl.
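As a rough illustration of the stepwise decomposition described above (all names, values, and the baseline choice are hypothetical; this is not the paper's estimator), a per-step REINFORCE-style surrogate weights each denoising step's log-probability by an intermediate advantage, here estimated from a one-step denoising reward minus a mean baseline:

```python
import numpy as np

def stepwise_surrogate(logps, advantages):
    """Per-step policy-gradient surrogate.

    logps:      (T,) log-prob of the sampled denoising action at each step
    advantages: (T,) intermediate advantages (here: one-step denoising
                rewards minus a baseline, a stand-in for the paper's estimator)
    The gradient of this scalar w.r.t. the policy parameters decomposes over
    steps, so no sequence-level likelihood is ever needed.
    """
    return float((logps * advantages).sum())

# Toy numbers: rewards from a one-step denoise, baseline = mean reward.
rewards = np.array([0.2, 0.9, 0.4, 0.7])
adv = rewards - rewards.mean()
logps = np.log(np.array([0.5, 0.3, 0.8, 0.6]))
loss = -stepwise_surrogate(logps, adv)   # minimized by gradient descent
```

The mean baseline keeps the advantages centered, so steps whose one-step denoised reward beats the average are reinforced and the rest are discouraged.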