Denoising, Fast and Slow: Difficulty-Aware Adaptive Sampling for Image Generation
arXiv cs.CV / 4/22/2026
📰 News · Ideas & Deep Analysis · Models & Research
Key Points
- Diffusion/flow image generation models typically use uniform compute across all image patches, but natural images vary in how difficult different regions are to denoise.
- The paper shows that simply assigning different timesteps per token can hurt performance because it trains the model on noisy states that won’t actually appear at inference.
- It proposes a timestep sampler that constrains the maximum patch-level information available during training, and demonstrates that moving from global to patch-level timesteps improves results versus standard baselines.
- Building on this, the authors add a lightweight per-patch "difficulty head" that drives adaptive compute allocation, and introduce Patch Forcing (PF), which denoises easier regions earlier so they provide context for harder ones. PF improves class-conditional ImageNet results, generalizes to text-to-image, and remains compatible with other techniques.
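To make the Patch Forcing idea concrete, here is a minimal toy sketch of a per-patch timestep schedule. The function name, signature, and the linear lag-by-difficulty rule are all hypothetical illustrations (the paper's actual scheduler is not specified here); the sketch only captures the stated intuition that easier patches reach low noise levels earlier than harder ones, while every patch starts at full noise and finishes fully denoised.

```python
import numpy as np

def patch_forcing_schedule(difficulty, num_steps, max_offset):
    """Toy per-patch timestep schedule (hypothetical, not the paper's method).

    Easier patches are denoised earlier: their effective timestep reaches 0
    sooner, so they can serve as clean context for harder patches.

    difficulty: (P,) scores; higher = harder to denoise.
    max_offset: maximum timestep lag a hard patch can have behind an easy one.
    Returns a (num_steps, P) array of effective timesteps in [0, 1].
    """
    difficulty = np.asarray(difficulty, dtype=float)
    # Map difficulty to a lag in [0, max_offset]; hardest patch lags most.
    spread = max(np.ptp(difficulty), 1e-8)
    lag = max_offset * (difficulty - difficulty.min()) / spread
    # Global schedule overshoots below 0 by max_offset so that even the
    # hardest (most lagged) patch still ends at t = 0.
    base = np.linspace(1.0, -max_offset, num_steps)[:, None]
    # Clipping keeps every patch's timestep in [0, 1]: all patches start
    # at t = 1 (full noise), easy patches hit t = 0 first.
    return np.clip(base + lag[None, :], 0.0, 1.0)
```

With two patches of difficulty 0.1 and 0.9, the easy patch's timestep is less than or equal to the hard patch's at every step, so the easy region is always at least as denoised and can act as context.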