
Streaming Autoregressive Video Generation via Diagonal Distillation

arXiv cs.CV · 11 Mar 2026


Key Points

  • Large pretrained diffusion models improve video quality but struggle with real-time streaming due to computational demands.
  • Autoregressive video models are efficient sequential frame synthesizers but face challenges in balancing fidelity and computation.
  • Existing video diffusion distillation methods adapt image-based techniques, resulting in underperformance for videos due to ignored temporal dependencies.
  • The proposed Diagonal Distillation method improves temporal-context usage and mitigates exposure bias through an asymmetric generation strategy: more denoising steps for early chunks, fewer for later ones (see the sketch after this list).
  • This approach achieves a 277.3x speedup, generating a 5-second video in 2.61 seconds (up to 31 FPS), while preserving motion quality through implicit optical flow modeling.
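
A minimal sketch of how such an asymmetric (diagonal) step schedule could be allocated, assuming a simple linear decay from the first chunk's step budget to the last; the chunk count and budgets below are illustrative, not taken from the paper:

    # Illustrative sketch (not the authors' code) of a "more steps early,
    # fewer steps later" schedule: per-chunk denoising steps decay linearly
    # from the first chunk's budget to the last chunk's.
    def diagonal_step_schedule(num_chunks: int, first_steps: int, last_steps: int) -> list[int]:
        """Allocate a decreasing number of denoising steps to each video chunk."""
        if num_chunks == 1:
            return [first_steps]
        span = num_chunks - 1
        return [round(first_steps + (last_steps - first_steps) * i / span)
                for i in range(num_chunks)]

    # Example: 5 chunks, 4 steps for the first chunk down to 1 for the last.
    print(diagonal_step_schedule(5, 4, 1))  # -> [4, 3, 2, 2, 1]

Any monotone decay would serve; the point is only that later chunks run fewer steps because they inherit appearance information from the thoroughly denoised early chunks.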


arXiv:2603.09488 (cs)
[Submitted on 10 Mar 2026]

Title: Streaming Autoregressive Video Generation via Diagonal Distillation

Authors: Jinxiu Liu and 5 other authors
Abstract: Large pretrained diffusion models have significantly enhanced the quality of generated videos, and yet their use in real-time streaming remains limited. Autoregressive models offer a natural framework for sequential frame synthesis but require heavy computation to achieve high fidelity. Diffusion distillation can compress these models into efficient few-step variants, but existing video distillation approaches largely adapt image-specific methods that neglect temporal dependencies. These techniques often excel in image generation but underperform in video synthesis, exhibiting reduced motion coherence, error accumulation over long sequences, and a latency-quality trade-off. We identify two factors that result in these limitations: insufficient utilization of temporal context during step reduction and implicit prediction of subsequent noise levels in next-chunk prediction (i.e., exposure bias). To address these issues, we propose Diagonal Distillation, which operates orthogonally to existing approaches and better exploits temporal information across both video chunks and denoising steps. Central to our approach is an asymmetric generation strategy: more steps early, fewer steps later. This design allows later chunks to inherit rich appearance information from thoroughly processed early chunks, while using partially denoised chunks as conditional inputs for subsequent synthesis. By aligning the implicit prediction of subsequent noise levels during chunk generation with the actual inference conditions, our approach mitigates error propagation and reduces oversaturation in long-range sequences. We further incorporate implicit optical flow modeling to preserve motion quality under strict step constraints. Our method generates a 5-second video in 2.61 seconds (up to 31 FPS), achieving a 277.3x speedup over the undistilled model.
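
The abstract's core mechanism lends itself to a short illustration. Below is a minimal sketch, not the authors' implementation, of a chunk-wise streaming loop in which each chunk runs its budgeted denoising steps and the partially denoised state of the current chunk conditions the next one; denoise_step is a hypothetical placeholder for the distilled network, and the chunk shape and schedule are illustrative assumptions:

    import numpy as np

    def denoise_step(x: np.ndarray, cond: np.ndarray | None, t: int) -> np.ndarray:
        """Hypothetical one-step denoiser; a real system would call the distilled model here."""
        return 0.5 * x if cond is None else 0.5 * x + 0.1 * cond  # placeholder update

    def generate_stream(schedule: list[int], chunk_shape=(8, 64, 64, 3)) -> list[np.ndarray]:
        chunks, cond = [], None
        for steps in schedule:                 # e.g. [4, 3, 2, 2, 1]: fewer steps later
            x = np.random.randn(*chunk_shape)  # each chunk starts from noise
            partial = x
            for k, t in enumerate(reversed(range(steps))):
                x = denoise_step(x, cond, t)   # few-step denoising
                if k == 0:
                    partial = x                # partially denoised state...
            cond = partial                     # ...conditions the NEXT chunk
            chunks.append(x)                   # stream the finished chunk
        return chunks

    video = generate_stream([4, 3, 2, 2, 1])

As a sanity check on the reported throughput: 31 frames per second for 2.61 seconds comes to roughly 81 frames, so the 5-second clip is consistent with a playback rate of about 16 fps, a common setting for video diffusion models.
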
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2603.09488 [cs.CV]
  (or arXiv:2603.09488v1 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2603.09488

Submission history

From: Xuanming Liu
[v1] Tue, 10 Mar 2026 10:45:24 UTC (12,315 KB)