AHPA: Adaptive Hierarchical Prior Alignment for Diffusion Transformers

arXiv cs.CV / 5/6/2026

📰 News · Models & Research

Key Points

  • The paper argues that current representation alignment for Diffusion Transformers often uses fixed supervision targets or fixed alignment granularity across all denoising timesteps, which is suboptimal.
  • It claims that the appropriate alignment granularity should vary with the signal-to-noise ratio: coarse semantic/layout anchoring works better at high-noise steps, while low-noise steps benefit from spatially detailed, structurally faithful refinement.
  • To fix the resulting representational mismatch, the authors propose Adaptive Hierarchical Prior Alignment (AHPA), which leverages multi-level hierarchical features from a frozen VAE encoder instead of relying on a single compressed latent target.
  • A timestep-conditioned Dynamic Router adaptively selects and weights these hierarchical priors along the denoising trajectory, aligning supervision granularity with changing training needs.
  • Experiments indicate AHPA improves convergence and generation quality versus baselines, adds no inference-time cost, and avoids external encoder supervision during training.

Abstract

Representation alignment has recently emerged as an effective paradigm for accelerating Diffusion Transformer training. Despite this success, existing alignment methods typically impose a fixed supervision target or a fixed alignment granularity throughout the entire denoising trajectory, whether the guidance is provided by external vision encoders, internal self-representations, or VAE-derived features. We argue that such timestep-agnostic alignment is suboptimal because the useful granularity of representation supervision changes systematically with the signal-to-noise ratio. In high-noise regimes, diffusion models benefit more from coarse semantic and layout-level anchoring, whereas in low-noise regimes, the training signal should emphasize spatially detailed and structurally faithful refinement. This non-stationary alignment behavior creates a representational mismatch for static single-level supervisors. To address this issue, we propose Adaptive Hierarchical Prior Alignment (AHPA), a lightweight alignment framework that exploits the hierarchical representations naturally embedded in the frozen VAE encoder. Instead of using only a single compressed latent as the alignment target, AHPA extracts multi-level VAE features that provide complementary priors ranging from local geometry and spatial topology to coarse semantic layout. A timestep-conditioned Dynamic Router adaptively selects and weights these hierarchical priors along the denoising trajectory, thereby synchronizing the alignment granularity with the model's evolving training needs. Extensive experiments show that AHPA improves convergence and generation quality over baselines and incurs no additional inference cost while avoiding external encoder supervision during training.
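
To make the mechanism concrete, below is a minimal sketch of how a timestep-conditioned router could weight alignment losses against multi-level features from a frozen VAE encoder. This is not the authors' code: the module names, feature shapes, MLP router design, and the cosine-similarity alignment loss are all illustrative assumptions based only on the description in the abstract.

```python
# Minimal sketch (not the paper's implementation) of timestep-conditioned,
# hierarchically weighted alignment against frozen-VAE features.
# All names, shapes, and the specific loss form are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicRouter(nn.Module):
    """Maps a timestep embedding to softmax weights over the feature levels."""

    def __init__(self, t_dim: int, num_levels: int, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(t_dim, hidden), nn.SiLU(), nn.Linear(hidden, num_levels)
        )

    def forward(self, t_emb: torch.Tensor) -> torch.Tensor:
        # (B, num_levels) weights, one simplex per sample.
        return self.mlp(t_emb).softmax(dim=-1)


def hierarchical_alignment_loss(dit_feats, vae_feats, projectors, router, t_emb):
    """Router-weighted sum of per-level alignment losses.

    dit_feats:  list of L tensors (B, N, C) from the Diffusion Transformer.
    vae_feats:  list of L tensors (B, N, C_l) from the frozen VAE encoder,
                used as targets only (no gradient flows into the VAE).
    projectors: list of L small heads mapping DiT features to each target dim.
    """
    weights = router(t_emb)                                   # (B, L)
    per_level = []
    for level, (f, g, proj) in enumerate(zip(dit_feats, vae_feats, projectors)):
        pred = proj(f)                                        # project to target space
        tgt = g.detach()                                      # frozen-VAE prior
        # Negative cosine similarity per token, averaged over tokens -> (B,)
        sim = F.cosine_similarity(pred, tgt, dim=-1).mean(dim=-1)
        per_level.append(weights[:, level] * (1.0 - sim))
    return torch.stack(per_level, dim=0).sum(dim=0).mean()
```

In such a setup the router and projection heads are used only to shape the training loss; generation still runs the Diffusion Transformer alone, which is consistent with the paper's claim that the method adds no inference-time cost.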