Accelerating Diffusion-based Video Editing via Heterogeneous Caching: Beyond Full Computing at Sampled Denoising Timestep

arXiv cs.CV / 3/26/2026


Key Points

  • The paper argues that existing diffusion video editing acceleration mainly reuses features across denoising timesteps, but fails to address redundant computation inside the DiT architecture’s attention over spatio-temporal tokens.
  • It proposes HetCache, a training-free acceleration framework that exploits heterogeneity in masked video-to-video (MV2V) generation/editing by separating DiT tokens into context and generative groups.
  • HetCache uses spatial priors to selectively cache only context tokens that have the strongest correlation and most representative semantics relative to generative tokens at chosen compute steps.
  • By reducing unnecessary attention operations while preserving editing consistency, the method achieves about 2.67× latency speedup and FLOPs reduction over commonly used foundation models with negligible quality degradation.

Abstract

Diffusion-based video editing has emerged as an important paradigm for high-quality and flexible content generation. However, despite their generality and strong modeling capacity, Diffusion Transformers (DiT) remain computationally expensive due to the iterative denoising process, posing challenges for practical deployment. Existing video diffusion acceleration methods primarily exploit timestep-level feature reuse, which mitigates redundancy in the denoising process but overlooks the architectural redundancy within the DiT: many attention operations over spatio-temporal tokens are executed redundantly, offering little to no incremental contribution to the model output. This work introduces HetCache, a training-free diffusion acceleration framework designed to exploit the inherent heterogeneity in diffusion-based masked video-to-video (MV2V) generation and editing. Instead of uniformly reusing or randomly sampling tokens, HetCache assesses the contextual relevance and interaction strength among different types of tokens at designated computing steps. Guided by spatial priors, it divides the spatio-temporal tokens in the DiT model into context and generative tokens, and selectively caches the context tokens that exhibit the strongest correlation with, and most representative semantics for, the generative ones. This strategy reduces redundant attention operations while maintaining editing consistency and fidelity. Experiments show that HetCache achieves a noticeable acceleration, including a 2.67× latency speedup and corresponding FLOPs reduction over commonly used foundation models, with negligible degradation in editing quality.
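The core selection step described above — partitioning spatio-temporal tokens via a spatial (edit-mask) prior and caching only the context tokens most correlated with the generative ones — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `select_context_cache`, the use of mean cosine-style similarity as a stand-in for attention interaction strength, and the top-k selection rule are all assumptions made for clarity.

```python
import numpy as np

def select_context_cache(tokens, edit_mask, k):
    """Hypothetical sketch of HetCache-style token selection.

    tokens:    (N, d) array of spatio-temporal token features
    edit_mask: (N,) bool array; True marks the edited region
               (generative tokens), False marks context tokens
    k:         number of context tokens to keep in the cache
    Returns the cached context-token features and their indices
    within the context group.
    """
    gen = tokens[edit_mask]      # generative tokens: always recomputed
    ctx = tokens[~edit_mask]     # context tokens: caching candidates
    # Score each context token by its mean scaled dot-product with the
    # generative tokens -- a simple proxy for attention interaction
    # strength (assumption; the paper's exact criterion may differ).
    sim = ctx @ gen.T / np.sqrt(tokens.shape[1])
    scores = sim.mean(axis=1)
    # Keep the k most strongly correlated context tokens.
    keep = np.argsort(scores)[::-1][:k]
    return ctx[keep], keep

# At a designated compute step, attention would then run only over the
# generative tokens plus this cached context subset, instead of all N
# tokens -- the source of the reported FLOPs reduction.
```

In this sketch the saving comes from shrinking the attention's key/value set: subsequent (cached) steps attend over `len(gen) + k` tokens rather than all `N`, while the context cache preserves the surrounding scene so edits stay consistent with the unedited video.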