Just-in-Time: Training-Free Spatial Acceleration for Diffusion Transformers

arXiv cs.CV / 3/12/2026

Key Points

  • Just-in-Time (JiT) is a training-free framework that accelerates diffusion transformers by restricting computation to a sparse set of anchor tokens, exploiting spatial redundancy rather than treating all spatial regions equally (see the sketch after this list).
  • JiT introduces a spatially approximated generative ordinary differential equation and a deterministic micro-flow to smoothly expand the latent state as new tokens are added, preserving structural coherence and statistical correctness.
  • In experiments on the FLUX.1-dev model, JiT achieves up to 7x inference speedup with nearly lossless performance, outperforming existing acceleration methods.
  • The work shifts the focus from temporal acceleration to spatial acceleration, enabling more practical deployment of diffusion-based models by reducing computational costs.
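
To make the anchor-token idea concrete, here is a minimal sketch of spatially sparse ODE sampling, as referenced in the first key point above. Everything in it is an illustrative assumption: `toy_velocity` stands in for the diffusion transformer's velocity network, and the saliency-based `select_anchors` schedule is hypothetical; the paper's actual selection rule is not described in this summary.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_TOKENS, DIM, STEPS = 256, 8, 20

def toy_velocity(tokens, t):
    # Stand-in for the learned velocity field v_theta(x_t, t).
    return -tokens * (1.0 - t)

def select_anchors(x, t):
    # Assumed schedule: activate more tokens as sampling progresses,
    # so fine-grained detail is computed "just in time".
    k = max(1, int(NUM_TOKENS * (0.1 + 0.9 * t)))
    norms = np.linalg.norm(x, axis=-1)      # hypothetical saliency proxy
    return np.argsort(norms)[-k:]           # keep the k most salient tokens

x = rng.standard_normal((NUM_TOKENS, DIM))  # initial noise latent
dt = 1.0 / STEPS
for step in range(STEPS):
    t = step * dt
    anchors = select_anchors(x, t)
    # The expensive transformer runs only on the sparse anchor subset;
    # for brevity, non-anchor tokens are simply frozen until activated.
    x[anchors] += dt * toy_velocity(x[anchors], t)
print("final latent norm:", float(np.linalg.norm(x)))
```

Note one simplification: the abstract says the anchor computations drive the evolution of the full latent state, whereas this sketch freezes non-anchor tokens. Bringing newly activated tokens back into agreement with the rest of the state is precisely the job of the micro-flow described in the abstract below.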

Abstract

Diffusion Transformers have established a new state-of-the-art in image synthesis, but the high computational cost of iterative sampling severely hampers their practical deployment. While existing acceleration methods often focus on the temporal domain, they overlook the substantial spatial redundancy inherent in the generative process, where global structures emerge long before fine-grained details are formed. The uniform computational treatment of all spatial regions represents a critical inefficiency. In this paper, we introduce Just-in-Time (JiT), a novel training-free framework that addresses this challenge through acceleration in the spatial domain. JiT formulates a spatially approximated generative ordinary differential equation (ODE) that drives the evolution of the full latent state based on computations from a dynamically selected, sparse subset of anchor tokens. To ensure seamless transitions as new tokens are incorporated to expand the dimensions of the latent state, we propose a deterministic micro-flow, a simple and effective finite-time ODE that maintains both structural coherence and statistical correctness. Extensive experiments on the state-of-the-art FLUX.1-dev model demonstrate that JiT achieves up to a 7x speedup with nearly lossless performance, significantly outperforming existing acceleration methods and establishing a superior trade-off between inference speed and generation fidelity.
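
Read schematically, the abstract describes two ODE-level components, sketched below in equation form. This is a hedged reconstruction, not the paper's notation: the anchor set, the extension operator, and the linear form of the micro-flow are all assumptions made for illustration.

```latex
% Full probability-flow ODE a sampler would normally integrate:
%   dx_t/dt = v_theta(x_t, t)
% Schematic spatial approximation: evaluate v_theta only on the sparse
% anchor set A_t and extend the result to the full latent state via an
% (assumed) operator E:
\[
  \frac{\mathrm{d}x_t}{\mathrm{d}t} \;\approx\;
  \mathcal{E}\!\left[ v_\theta\!\big(x_t^{(\mathcal{A}_t)},\, t\big) \right],
  \qquad \mathcal{A}_t \subset \{1, \dots, N\}, \quad |\mathcal{A}_t| \ll N.
\]
% One plausible deterministic micro-flow: when token i is activated, move it
% from its initialization \hat{x}^{(i)} to a target x_star^{(i)} consistent
% with the already-active tokens, over a finite auxiliary time s in [0, 1]:
\[
  \frac{\mathrm{d}x_s^{(i)}}{\mathrm{d}s} = x_\star^{(i)} - \hat{x}^{(i)},
  \qquad x_0^{(i)} = \hat{x}^{(i)} \;\Rightarrow\; x_1^{(i)} = x_\star^{(i)}.
\]
```

The constant-velocity form shown is simply the most elementary finite-time ODE that reaches its target exactly at s = 1; the abstract only tells us the real micro-flow is deterministic, finite-time, and preserves structural coherence and statistical correctness.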