LightningRL: Breaking the Accuracy-Parallelism Trade-off of Block-wise dLLMs via Reinforcement Learning

arXiv cs.LG / 3/17/2026


Key Points

  • LightningRL is a post-training reinforcement learning framework designed to optimize the speed-quality Pareto frontier for pre-trained block-wise diffusion LLMs.
  • Rather than forcing uniform parallelization, it uses Group Relative Policy Optimization (GRPO) to identify and reinforce high-parallelism trajectories that maintain generation accuracy.
  • The method introduces per-reward decoupled normalization, token-level NLL regularization on correct trajectories, and a dynamic sampling strategy with TPF-aware filtering to stabilize training and improve efficiency.
  • Experimental results across mathematical and coding benchmarks show that LightningRL advances the Pareto frontier, achieving an average TPF of 7.32 and a peak of 11.10 on MBPP, with code released at the linked GitHub repository.
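
The paper's "per-reward decoupled normalization" is not spelled out in this summary, but the idea can be sketched: instead of standardizing the single summed reward within each GRPO group, each reward component (here, a hypothetical accuracy reward and a parallelism/TPF reward) is standardized separately and the normalized terms are then combined, so a high-variance component cannot swamp the other. The function name and the two-component split below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def decoupled_grpo_advantages(acc_rewards, tpf_rewards, eps=1e-6):
    """Sketch of per-reward decoupled normalization for one GRPO group.

    Each reward component is z-normalized across the group's sampled
    trajectories on its own, then the normalized components are summed
    into a single per-trajectory advantage. (Hypothetical reconstruction;
    the paper's exact scheme may differ.)
    """
    acc = np.asarray(acc_rewards, dtype=float)
    tpf = np.asarray(tpf_rewards, dtype=float)
    acc_norm = (acc - acc.mean()) / (acc.std() + eps)  # correctness signal
    tpf_norm = (tpf - tpf.mean()) / (tpf.std() + eps)  # parallelism signal
    return acc_norm + tpf_norm

# One GRPO group: 4 trajectories sampled for the same prompt.
adv = decoupled_grpo_advantages([1.0, 0.0, 1.0, 0.0], [6.2, 9.1, 7.5, 4.8])
```

Because each component is centered within the group before summing, the advantages still sum to (approximately) zero across the group, preserving GRPO's relative-comparison structure while letting both objectives contribute on a comparable scale.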

Abstract

Diffusion Large Language Models (dLLMs) have emerged as a promising paradigm for parallel token generation, with block-wise variants garnering significant research interest. Despite their potential, existing dLLMs typically suffer from a rigid accuracy-parallelism trade-off: increasing the number of tokens per forward (TPF) via aggressive parallel decoding often leads to performance degradation and increased generation instability. We identify that this limitation stems from the model's inability to navigate high-parallelism regimes where approximation errors and local corruptions accumulate, ultimately undermining the reliability of parallel generation. To address this, we propose LightningRL, a post-training framework designed to directly optimize the speed-quality Pareto frontier of pre-trained dLLMs. Instead of forcing uniform parallelization, our approach leverages reinforcement learning to identify and reinforce high-parallelism trajectories that maintain generation accuracy. Built upon the Group Relative Policy Optimization (GRPO) framework, LightningRL introduces several enhancements tailored for dLLMs: (1) stabilized training via per-reward decoupled normalization; (2) token-level negative log-likelihood (NLL) regularization on correct trajectories to anchor model performance; and (3) a dynamic sampling strategy with TPF-aware filtering to enhance training efficiency. Experimental results across mathematical and coding benchmarks demonstrate that LightningRL consistently advances the Pareto frontier, achieving competitive task accuracy while significantly increasing parallelism, reaching an average TPF of 7.32 (with a peak of 11.10 on the MBPP dataset). Our code is available at https://github.com/SJTU-DENG-Lab/LightningRL.
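
The "dynamic sampling strategy with TPF-aware filtering" in the abstract can be illustrated with a small sketch, assuming two filtering rules that are plausible but not confirmed by the paper: drop prompt groups whose rewards are all identical (they yield zero advantage everywhere, a trick familiar from dynamic-sampling RL recipes), and within the remaining groups discard correct trajectories whose tokens-per-forward falls below a floor, so the policy is only reinforced on high-parallelism successes. All names and thresholds here are hypothetical.

```python
def tpf_aware_filter(groups, min_tpf=2.0):
    """Hypothetical dynamic-sampling filter for GRPO groups.

    `groups` is a list of groups; each group is a list of trajectory
    dicts with a scalar "reward" and a measured "tpf" (tokens per
    forward). Groups with no reward spread carry no learning signal
    and are skipped; low-TPF correct trajectories are trimmed so
    training emphasizes accurate *and* parallel generations.
    """
    kept = []
    for group in groups:
        rewards = [t["reward"] for t in group]
        if max(rewards) == min(rewards):
            continue  # degenerate group: all advantages would be zero
        trimmed = [t for t in group
                   if not (t["reward"] > 0 and t["tpf"] < min_tpf)]
        if trimmed:
            kept.append(trimmed)
    return kept

groups = [
    [{"reward": 1.0, "tpf": 6.0}, {"reward": 0.0, "tpf": 9.0}],  # kept intact
    [{"reward": 1.0, "tpf": 1.0}, {"reward": 1.0, "tpf": 8.0}],  # no spread: dropped
    [{"reward": 1.0, "tpf": 1.5}, {"reward": 0.0, "tpf": 3.0}],  # low-TPF success trimmed
]
kept = tpf_aware_filter(groups)
```

This kind of filter trades some sample efficiency (discarded rollouts) for a cleaner gradient signal, which is consistent with the abstract's framing of the strategy as a training-efficiency enhancement.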