DiT as Real-Time Rerenderer: Streaming Video Stylization with Autoregressive Diffusion Transformer

arXiv cs.CV / 4/16/2026


Key Points

  • The paper introduces RTR-DiT, a streaming video stylization framework that uses a Diffusion Transformer to improve stability and consistency on long videos.
  • It fine-tunes a bidirectional teacher model for both text-guided and reference-guided stylization, then compresses it into a few-step autoregressive model using Self Forcing and Distribution Matching Distillation.
  • A reference-preserving KV cache update strategy is proposed to maintain consistency across long sequences and enable real-time switching between text prompts and reference images.
  • Experiments report that RTR-DiT outperforms prior diffusion-based stylization approaches on both quantitative metrics and visual quality, while supporting real-time interactive applications.
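The reference-preserving KV cache update can be pictured with a minimal sketch. Everything here (the class name, the fixed-prefix policy, the string stand-ins for KV tensors) is an assumption for illustration, not the paper's implementation: the idea is to pin the reference tokens' KV entries while evicting the oldest frame tokens, so the attention context stays bounded on long streams and the reference can be swapped mid-stream.

```python
from collections import deque

class RefPreservingKVCache:
    """Illustrative sketch (not RTR-DiT's actual code): pin the
    reference-token KV block, roll the per-frame KV blocks."""

    def __init__(self, max_frames):
        self.ref_kv = None                         # pinned reference entries
        self.frame_kv = deque(maxlen=max_frames)   # rolling frame entries

    def set_reference(self, kv):
        # Real-time style switching = replacing only the pinned block;
        # the rolling frame history is untouched.
        self.ref_kv = kv

    def append_frame(self, kv):
        self.frame_kv.append(kv)   # oldest frame entries auto-evicted

    def context(self):
        # Attention context = pinned reference + most recent frames.
        refs = [self.ref_kv] if self.ref_kv is not None else []
        return refs + list(self.frame_kv)

cache = RefPreservingKVCache(max_frames=3)
cache.set_reference("style_ref")
for t in range(5):
    cache.append_frame(f"frame_{t}")
print(cache.context())  # ['style_ref', 'frame_2', 'frame_3', 'frame_4']
```

The design choice the sketch highlights is that eviction never touches the reference block, which is one plausible way to keep the stylization target consistent no matter how long the video runs.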

Abstract

Recent advances in video generation models have significantly accelerated video generation and related downstream tasks. Among these, video stylization holds important research value in areas such as immersive applications and artistic creation, and has attracted widespread attention. However, existing diffusion-based video stylization methods struggle to maintain stability and consistency on long videos, and their high computational cost and multi-step denoising make them difficult to apply in practical scenarios. In this work, we propose RTR-DiT (DiT as Real-Time Rerenderer), a streaming video stylization framework built upon the Diffusion Transformer. We first fine-tune a bidirectional teacher model on a curated video stylization dataset, supporting both text-guided and reference-guided video stylization, and then distill it into a few-step autoregressive model via post-training with Self Forcing and Distribution Matching Distillation. Furthermore, we propose a reference-preserving KV cache update strategy that not only enables stable, consistent processing of long videos but also supports real-time switching between text prompts and reference images. Experimental results show that RTR-DiT outperforms existing methods on both text-guided and reference-guided video stylization, in terms of quantitative metrics and visual quality, and demonstrates excellent performance in real-time long-video stylization and interactive style-switching applications.
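The streaming rollout the abstract describes can be sketched as a loop: each chunk of frames is denoised in a few steps by the distilled autoregressive model, conditioned on the KV cache of earlier chunks, and its own KV entries are then appended before the next chunk. All names below (`denoise_chunk`, `stream_stylize`, the cache bound) are hypothetical placeholders, not RTR-DiT's API; the toy denoiser just records the context size to make the causal data flow concrete.

```python
def denoise_chunk(noisy, context, steps=4):
    """Stand-in for the distilled few-step DiT (hypothetical).
    Returns a tag showing how much causal context conditioned it."""
    return f"chunk(ctx={len(context)}, steps={steps})"

def stream_stylize(num_chunks, cache_limit=8):
    """Autoregressive streaming loop: denoise, emit, extend the cache."""
    cache, outputs = [], []
    for t in range(num_chunks):
        noisy = f"noise_{t}"                # fresh noise for this chunk
        out = denoise_chunk(noisy, cache)   # few denoising steps, causal context
        outputs.append(out)
        cache.append(f"kv_{t}")             # append this chunk's KV entries
        cache = cache[-cache_limit:]        # bound memory for long videos
    return outputs

print(stream_stylize(3))
# ['chunk(ctx=0, steps=4)', 'chunk(ctx=1, steps=4)', 'chunk(ctx=2, steps=4)']
```

The key contrast with the multi-step bidirectional teacher is visible in the loop structure: each chunk is produced once, in a handful of denoising steps, from already-emitted context only, which is what makes real-time streaming feasible.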