Denoising, Fast and Slow: Difficulty-Aware Adaptive Sampling for Image Generation

arXiv cs.CV · April 22, 2026


Key Points

  • Diffusion/flow image generation models typically use uniform compute across all image patches, but natural images vary in how difficult different regions are to denoise.
  • The paper shows that simply assigning different timesteps per token can hurt performance because it trains the model on noisy states that won’t actually appear at inference.
  • It proposes a timestep sampler that constrains the maximum patch-level information available during training, and demonstrates that moving from global to patch-level timesteps improves results versus standard baselines.
  • Building on this, the authors add a lightweight per-patch “difficulty head” that drives adaptive compute allocation, and introduce Patch Forcing (PF), which schedules easier regions earlier so they can provide context for harder ones. PF improves class-conditional ImageNet results, generalizes to text-to-image synthesis, and remains compatible with complementary techniques such as representation alignment and guidance.
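The constrained sampler in the second and third bullets can be illustrated with a minimal sketch. This is a hypothetical construction, not the paper's exact scheme: a globally sampled floor `t_min` caps how clean (and therefore how informative) any individual patch may be during training, so the model never trains on patch configurations more informative than those reachable at inference.

```python
import numpy as np

def sample_patch_timesteps(num_patches, rng):
    """Illustrative patch-level timestep sampler (hypothetical sketch).

    Convention: t = 0 is clean data, t = 1 is pure noise.
    A global floor t_min bounds the maximum patch-level information:
    every patch keeps noise level >= t_min, so no patch is "too clean"
    relative to states the sampler would actually visit at inference.
    """
    t_min = rng.uniform(0.0, 1.0)                    # global information cap
    t = rng.uniform(t_min, 1.0, size=num_patches)    # per-patch noise levels
    return t_min, t

# Usage: draw per-patch timesteps for a 16x16 grid of patches.
rng = np.random.default_rng(0)
t_min, t = sample_patch_timesteps(256, rng)
```

Setting `t_min = t` for all patches recovers the standard global-timestep baseline, which is why patch-level timesteps strictly generalize it.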

Abstract

Diffusion- and flow-based models usually allocate compute uniformly across space, updating all patches with the same timestep and number of function evaluations. While convenient, this ignores the heterogeneity of natural images: some regions are easy to denoise, whereas others benefit from more refinement or additional context. Motivated by this, we explore patch-level noise scales for image synthesis. We find that naively varying timesteps across image tokens performs poorly, as it exposes the model to overly informative training states that do not occur at inference. We therefore introduce a timestep sampler that explicitly controls the maximum patch-level information available during training, and show that moving from global to patch-level timesteps already improves image generation over standard baselines. By further augmenting the model with a lightweight per-patch difficulty head, we enable adaptive samplers that allocate compute dynamically where it is most needed. Combined with noise levels varying over both space and diffusion time, this yields Patch Forcing (PF), a framework that advances easier regions earlier so they can provide context for harder ones. PF achieves superior results on class-conditional ImageNet, remains orthogonal to representation alignment and guidance methods, and scales to text-to-image synthesis. Our results suggest that patch-level denoising schedules provide a promising foundation for adaptive image generation.
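The scheduling idea behind Patch Forcing can be sketched as follows. This is a simplified illustration under assumed details (the per-patch difficulty scores, the linear per-patch schedule, and the start-offset rule are all hypothetical, not the paper's parameterization): easier patches begin denoising earlier, so by any intermediate step they carry less noise and can serve as context for harder patches, while all patches finish together.

```python
import numpy as np

def patch_schedule(difficulty, total_steps):
    """Hypothetical difficulty-aware schedule in the spirit of Patch Forcing.

    `difficulty` holds per-patch scores in [0, 1] (assumed to come from a
    per-patch difficulty head). Easy patches (low score) start denoising at
    global step 0; harder patches start later, so they are refined while
    easier regions already provide partially denoised context. All patches
    reach t = 0 (clean) at `total_steps`.
    Convention: t = 1 is pure noise, t = 0 is clean.
    """
    difficulty = np.asarray(difficulty, dtype=float)
    start = np.round(difficulty * (total_steps // 2)).astype(int)

    def t_at(k):
        # Each patch moves linearly from t=1 to t=0 over its own window
        # [start, total_steps]; before its window opens it stays at t=1.
        frac = np.clip((k - start) / np.maximum(total_steps - start, 1), 0.0, 1.0)
        return 1.0 - frac

    return t_at

# Usage: one very easy and one very hard patch over 20 global steps.
t_at = patch_schedule([0.0, 1.0], total_steps=20)
```

At the midpoint the easy patch is already partially denoised while the hard patch is still near pure noise, which is the "easy regions provide context for hard ones" behavior described in the abstract.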