DiffSparse: Accelerating Diffusion Transformers with Learned Token Sparsity

arXiv cs.CV / 4/7/2026


Key Points

  • DiffSparse proposes a differentiable, layer-wise sparsity optimization framework for diffusion transformer image generation to cut the high compute cost of multi-step inference.
  • The approach combines token caching with an end-to-end learnable sparsity allocation network and a dynamic programming solver, targeting inefficiencies in prior caching and sparsity strategies (a hedged sketch of such a solver follows this list).
  • A two-stage training strategy eliminates the full-step processing required by earlier token-cache methods, further improving inference efficiency.
  • Experiments across multiple diffusion-transformer models (e.g., DiT-XL/2, PixArt-α, FLUX, Wan2.1) show consistent efficiency gains without quality degradation.
  • On PixArt-α at 20 sampling steps, DiffSparse reports a 54% reduction in computational cost while achieving generation metrics that outperform the baseline and prior methods.

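The page doesn't include pseudocode, but a minimal sketch of one way a budget-constrained, per-layer sparsity allocation could be solved with dynamic programming is shown below; the function name, the score/cost tables, and the integer budget are illustrative assumptions rather than details from the paper.

```python
# Hypothetical sketch: pick one token-sparsity level per layer so that total
# compute stays within an integer budget while a per-layer quality score is
# maximized. `score[l][k]` and `cost[k]` are assumed inputs (e.g. estimated by
# a learned allocation network), not quantities defined in the paper.

def allocate_sparsity(score, cost, budget):
    """score[l][k]: estimated quality of using sparsity level k at layer l.
    cost[k]: integer compute cost of level k. budget: integer compute budget.
    Returns one level index per layer, or None if no allocation fits."""
    num_layers, num_levels = len(score), len(score[0])
    NEG = float("-inf")
    # best[b]: best total score with accumulated cost exactly b so far
    best = [0.0] + [NEG] * budget
    choice = [[-1] * (budget + 1) for _ in range(num_layers)]

    for l in range(num_layers):
        new_best = [NEG] * (budget + 1)
        for b in range(budget + 1):
            if best[b] == NEG:
                continue
            for k in range(num_levels):
                nb = b + cost[k]
                if nb <= budget and best[b] + score[l][k] > new_best[nb]:
                    new_best[nb] = best[b] + score[l][k]
                    choice[l][nb] = k
        best = new_best

    # Backtrack from the best reachable total cost.
    b = max(range(budget + 1), key=lambda i: best[i])
    if best[b] == NEG:
        return None
    levels = []
    for l in reversed(range(num_layers)):
        k = choice[l][b]
        levels.append(k)
        b -= cost[k]
    return list(reversed(levels))

# Toy usage: three layers, levels = {dense, half, quarter} with costs 4/2/1.
# levels = allocate_sparsity(score=[[1.0, 0.8, 0.5]] * 3, cost=[4, 2, 1], budget=7)
```
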
Abstract

Diffusion models demonstrate outstanding performance in image generation, but their multi-step inference mechanism incurs immense computational cost. Previous works accelerate inference by leveraging layer- or token-cache techniques to reduce computational cost. However, these methods fail to achieve superior acceleration in few-step diffusion transformer models due to inefficient feature-caching strategies, manually designed sparsity allocation, and the retention of complete forward computation at several steps in existing token-cache methods. To tackle these challenges, we propose a differentiable layer-wise sparsity optimization framework for diffusion transformer models, leveraging token caching to reduce token computation costs and enhance acceleration. Our method optimizes layer-wise sparsity allocation in an end-to-end manner through a learnable network combined with a dynamic programming solver. Additionally, our proposed two-stage training strategy eliminates the need for full-step processing in existing methods, further improving efficiency. We conducted extensive experiments on a range of diffusion-transformer models, including DiT-XL/2, PixArt-α, FLUX, and Wan2.1. Across these architectures, our method consistently improves efficiency without degrading sample quality. For example, on PixArt-α with 20 sampling steps, we reduce computational cost by 54% while achieving generation metrics that surpass those of the original model, substantially outperforming prior approaches. These results demonstrate that our method delivers large efficiency gains while often improving generation quality.
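
As a rough illustration of the token-caching idea in the abstract, the sketch below reuses cached features for tokens that a (learned) keep-mask marks as inactive and recomputes only the retained tokens through a transformer block; the function and argument names are hypothetical, not the paper's API.

```python
import torch

def cached_block_forward(block, tokens, keep_mask, cache):
    """block: any module mapping (batch, kept, dim) -> (batch, kept, dim).
    tokens: (batch, seq, dim) current activations; keep_mask: (seq,) bool;
    cache: (batch, seq, dim) block output from the previous denoising step."""
    out = cache.clone()                 # inactive tokens reuse cached features
    kept = tokens[:, keep_mask, :]      # gather only the active tokens
    out[:, keep_mask, :] = block(kept)  # recompute just those positions
    return out                          # also serves as the cache for the next step

# Toy usage with assumed shapes:
# block = torch.nn.Linear(64, 64)
# tokens = torch.randn(2, 16, 64)
# keep_mask = torch.rand(16) > 0.5
# cache = torch.zeros(2, 16, 64)
# out = cached_block_forward(block, tokens, keep_mask, cache)
```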