MPDiT: Multi-Patch Global-to-Local Transformer Architecture For Efficient Flow Matching and Diffusion Model

arXiv cs.CV / 3/30/2026


Key Points

  • The paper proposes MPDiT, a multi-patch global-to-local transformer architecture for diffusion/flow-matching models that processes larger patches in early blocks and smaller patches in later blocks to capture coarse context then refine details.
  • It claims the hierarchical patching strategy can cut training compute by up to ~50% in GFLOPs while maintaining strong generative performance.
  • MPDiT also includes improved time and class embedding designs intended to accelerate training convergence.
  • Experiments on ImageNet are reported to validate the architectural and embedding choices.
  • The authors release code via GitHub, enabling others to reproduce and build upon the approach.

Abstract

Transformer architectures, particularly Diffusion Transformers (DiTs), have become widely used in diffusion and flow-matching models due to their strong performance compared to convolutional UNets. However, the isotropic design of DiTs processes the same number of patchified tokens in every block, leading to relatively heavy computation during training. In this work, we introduce a multi-patch transformer design in which early blocks operate on larger patches to capture coarse global context, while later blocks use smaller patches to refine local details. This hierarchical design reduces computational cost by up to 50% in GFLOPs while maintaining strong generative performance. In addition, we propose improved designs for time and class embeddings that accelerate training convergence. Extensive experiments on the ImageNet dataset demonstrate the effectiveness of our architectural choices. Code is released at https://github.com/quandao10/MPDiT.
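To see why coarse-to-fine patching saves compute, it helps to count tokens: a larger patch size quadratically shrinks the token sequence, which cuts both the attention and MLP cost of each block. The sketch below uses illustrative numbers (a 32×32 latent grid, hidden width 1152, 28 blocks, and a simple half-coarse/half-fine split), not the paper's exact configuration, and a standard rough FLOP estimate for a transformer block.

```python
# Illustrative token/FLOP arithmetic for global-to-local patching.
# All sizes (latent grid, width, depth, patch split) are assumptions
# for illustration, not MPDiT's actual configuration.

def num_tokens(latent_size: int, patch: int) -> int:
    """Tokens produced by patchifying a square latent grid."""
    return (latent_size // patch) ** 2

def block_flops(n: int, d: int) -> int:
    """Rough per-block cost: QKV/out projections ~4*n*d^2,
    attention matmuls ~2*n^2*d, MLP (4x expansion) ~8*n*d^2."""
    return 4 * n * d**2 + 2 * n**2 * d + 8 * n * d**2

latent, width, depth = 32, 1152, 28

# Isotropic DiT baseline: patch size 2 in every block.
iso = depth * block_flops(num_tokens(latent, 2), width)

# Multi-patch variant: first half of the blocks use patch 4
# (coarse, 4x fewer tokens), second half use patch 2 (fine).
mp = (depth // 2) * block_flops(num_tokens(latent, 4), width) \
   + (depth // 2) * block_flops(num_tokens(latent, 2), width)

print(f"relative cost: {mp / iso:.2f}")  # → relative cost: 0.62
```

With this particular split the multi-patch schedule costs roughly 62% of the isotropic baseline; more aggressive schedules (larger early patches or more coarse blocks) would push the savings toward the ~50% figure the paper reports.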