AI Navigate

LADR: Locality-Aware Dynamic Rescue for Efficient Text-to-Image Generation with Diffusion Large Language Models

arXiv cs.CV / 3/17/2026


Key Points

  • LADR is a training-free method that accelerates inference for discrete diffusion language models used in text-to-image generation by exploiting the 2D spatial locality of images.
  • The approach prioritizes recovering tokens at the generation frontier—regions spatially adjacent to already-decoded pixels—using morphological neighbor identification to locate candidate tokens and risk-bounded filtering to curb error propagation.
  • It introduces manifold-consistent inverse scheduling to align the diffusion trajectory with the accelerated mask density, enabling approximately 4x speedups on four benchmarks.
  • Despite the speedup, LADR maintains or even improves generative fidelity, particularly in spatial reasoning tasks, offering a strong efficiency-versus-quality trade-off.
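To make the "morphological neighbor identification" step concrete, here is a minimal sketch (not the paper's code) of how frontier tokens could be found: take the boolean mask of already-decoded positions, dilate it by one step in each of the four grid directions, and subtract the decoded set. The 4-neighborhood choice and the pure-NumPy shift-based dilation are illustrative assumptions.

```python
import numpy as np

def frontier_mask(decoded: np.ndarray) -> np.ndarray:
    """Return a boolean mask of still-masked positions that are 4-neighbors
    of at least one already-decoded token (the 'generation frontier').

    decoded: (H, W) boolean array, True where a token is already decoded.
    """
    # One-step morphological dilation of the decoded set via axis shifts.
    dilated = decoded.copy()
    for axis in (0, 1):
        for shift in (1, -1):
            rolled = np.roll(decoded, shift, axis=axis)
            # Zero out the row/column that wrapped around the grid edge.
            if axis == 0:
                rolled[0 if shift == 1 else -1, :] = False
            else:
                rolled[:, 0 if shift == 1 else -1] = False
            dilated |= rolled
    # Frontier = dilated region minus the tokens already decoded.
    return dilated & ~decoded
```

For example, with a single decoded token at the center of a 4x4 grid, the frontier is exactly its four axis-aligned neighbors.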

Abstract

Discrete Diffusion Language Models have emerged as a compelling paradigm for unified multimodal generation, yet their deployment is hindered by high inference latency arising from iterative decoding. Existing acceleration strategies often require expensive re-training or fail to leverage the 2D spatial redundancy inherent in visual data. To address this, we propose Locality-Aware Dynamic Rescue (LADR), a training-free method that expedites inference by exploiting the spatial Markov property of images. LADR prioritizes the recovery of tokens at the "generation frontier", regions spatially adjacent to observed pixels, thereby maximizing information gain. Specifically, our method integrates morphological neighbor identification to locate candidate tokens, employs a risk-bounded filtering mechanism to prevent error propagation, and utilizes manifold-consistent inverse scheduling to align the diffusion trajectory with the accelerated mask density. Extensive experiments on four text-to-image generation benchmarks demonstrate that LADR achieves an approximate 4x speedup over standard baselines. Remarkably, it maintains or even enhances generative fidelity, particularly in spatial reasoning tasks, offering a state-of-the-art trade-off between efficiency and quality.
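The abstract's "risk-bounded filtering" can be illustrated with a simple confidence-threshold sketch: among the frontier candidates, commit only those positions whose top-1 probability clears a threshold, so low-confidence tokens stay masked for a later step. This is a hypothetical rendering, not the paper's implementation; the threshold `tau`, the greedy token choice, and the `(H, W, V)` logits layout are all assumptions.

```python
import numpy as np

def risk_bounded_select(logits: np.ndarray, candidates: np.ndarray, tau: float = 0.9):
    """Keep only candidate positions whose top-1 probability exceeds tau,
    bounding the risk of committing a wrong token early in decoding.

    logits: (H, W, V) per-position vocabulary logits.
    candidates: (H, W) boolean frontier mask.
    Returns (accept_mask, token_ids); rejected positions get token id -1.
    """
    # Numerically stabilized softmax over the vocabulary axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z)
    probs /= probs.sum(axis=-1, keepdims=True)
    conf = probs.max(axis=-1)       # top-1 confidence per position
    tokens = probs.argmax(axis=-1)  # greedy token per position
    accept = candidates & (conf >= tau)
    return accept, np.where(accept, tokens, -1)
```

A sharply peaked logit vector passes the filter while a uniform one is deferred, which is the intended behavior: only high-information, low-risk tokens are rescued ahead of schedule.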