Just-in-Time: Training-Free Spatial Acceleration for Diffusion Transformers
arXiv cs.CV / 3/12/2026
Key Points
- Just-in-Time (JiT) is a training-free framework that accelerates diffusion transformers by computing on a sparse set of anchor tokens, exploiting spatial redundancy rather than treating all spatial regions equally.
- JiT introduces a spatially approximated generative ordinary differential equation and a deterministic micro-flow to smoothly expand the latent state as new tokens are added, preserving structural coherence and statistical correctness.
- In experiments on the FLUX.1-dev model, JiT achieves up to a 7x inference speedup with nearly lossless generation quality, outperforming existing acceleration methods.
- The work shifts the focus from temporal acceleration to spatial acceleration, enabling more practical deployment of diffusion-based models by reducing computational costs.
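The core idea in the first bullet, computing on a sparse subset of anchor tokens and expanding the result to the full spatial grid, can be illustrated with a toy sketch. This is not the paper's actual algorithm (JiT's anchor selection, micro-flow, and ODE approximation are not reproduced here); it only shows the generic pattern of running an expensive token-wise computation on a strided anchor grid and filling in the remaining tokens from their nearest anchor. All function names and the stride-grid selection scheme are illustrative assumptions.

```python
import numpy as np

def select_anchors(h, w, stride):
    """Pick a regular strided grid of anchor indices from an h*w token map.
    (Illustrative stand-in; JiT's actual anchor selection is more involved.)"""
    ys = np.arange(0, h, stride)
    xs = np.arange(0, w, stride)
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    return (yy * w + xx).ravel()

def sparse_step(tokens, h, w, stride, f):
    """Apply an expensive token-wise function f only to anchor tokens,
    then expand the results to all tokens by nearest-anchor copy."""
    idx = select_anchors(h, w, stride)
    out_anchors = f(tokens[idx])            # compute on sparse anchors only
    # Nearest-anchor expansion back to the full grid:
    yy, xx = np.divmod(np.arange(h * w), w)
    ay = np.clip((yy // stride) * stride, 0, h - 1)
    ax = np.clip((xx // stride) * stride, 0, w - 1)
    anchor_of = ay * w + ax                 # each token's nearest anchor id
    pos = {a: i for i, a in enumerate(idx)} # anchor id -> row in out_anchors
    return out_anchors[[pos[a] for a in anchor_of]]

# Toy usage: an 8x8 grid of 4-dim tokens; stride 2 computes on 16 of 64 tokens.
tokens = np.random.randn(64, 4)
out = sparse_step(tokens, 8, 8, 2, lambda t: t * 2.0)
print(out.shape)  # (64, 4)
```

With a stride of 2, only a quarter of the tokens pass through `f`, which is where the speedup comes from; the trade-off is that non-anchor tokens inherit their nearest anchor's output, which is acceptable only where spatial redundancy is high, the property JiT exploits.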