UniVid: Pyramid Diffusion Model for High Quality Video Generation

arXiv cs.CV / 3/17/2026

📰 News · Models & Research

Key Points

  • UniVid is a unified video generation model that enables T2V, I2V, and (T+I)2V generation by accepting a text prompt, a reference image, or both as controls.
  • It scales up a pre-trained text-to-image diffusion backbone and adds temporal-pyramid cross-frame attention modules and convolutions to produce temporally coherent video frames.
  • It introduces a dual-stream cross-attention mechanism whose attention scores can be re-weighted to interpolate between single-modal and bimodal controls during inference.
  • Experimental results show UniVid achieves superior temporal coherence across T2V, I2V, and (T+I)2V tasks.
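The paper's temporal-pyramid attention is only described at a high level here, so the exact frame-sampling pattern is not specified. One plausible reading is that each frame attends to neighbors at exponentially increasing temporal offsets, which covers both nearby and distant frames cheaply. The function name, stride values, and index scheme below are assumptions for illustration, not the authors' implementation:

```python
def temporal_pyramid_neighbors(t, num_frames, strides=(1, 2, 4)):
    """Hypothetical sketch of a temporal-pyramid attention pattern:
    frame t attends to itself plus frames at offsets +/-1, +/-2, +/-4,
    mixing short- and long-range temporal context at low cost.
    The stride set (1, 2, 4) is an assumed default, not from the paper."""
    idx = {t}
    for s in strides:
        for sign in (-1, 1):
            j = t + sign * s
            if 0 <= j < num_frames:  # drop offsets that fall outside the clip
                idx.add(j)
    return sorted(idx)
```

For a 16-frame clip, frame 4 would attend to frames `[0, 2, 3, 4, 5, 6, 8]` under this pattern; each frame sees at most `2 * len(strides) + 1` others regardless of clip length.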

Abstract

Diffusion-based text-to-video (T2V) and image-to-video (I2V) generation have emerged as a prominent research focus. However, integrating the two generative paradigms into a unified model remains challenging. In this paper, we present UniVid, a unified video generation model conditioned jointly on a text prompt and a reference image. Given these two controls, our model extracts objects' appearance and motion descriptions from the textual prompt, while obtaining texture details and structural information from the image to guide the video generation process. Specifically, we scale up a pre-trained text-to-image diffusion model to generate temporally coherent frames by introducing our temporal-pyramid cross-frame spatial-temporal attention modules and convolutions. To support bimodal control, we introduce a dual-stream cross-attention mechanism whose attention scores can be freely re-weighted at inference time to interpolate between single-modal and bimodal controls. Extensive experiments show that UniVid achieves superior temporal coherence on T2V, I2V, and (T+I)2V tasks.
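The dual-stream mechanism can be pictured as two separate cross-attention passes, one over text tokens and one over image tokens, blended by scalar weights at inference. This is a minimal numpy sketch of that idea under assumed shapes and naming; the actual re-weighting in UniVid may act on attention scores rather than outputs, and none of these names come from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dual_stream_cross_attention(q, k_text, v_text, k_img, v_img,
                                w_text=1.0, w_img=1.0):
    """Hypothetical dual-stream cross-attention: the video latent queries q
    attend separately to text tokens and to reference-image tokens, and the
    two streams are blended by re-weightable scalars. Setting w_img=0
    recovers pure text control (T2V); w_text=0 gives pure image control
    (I2V); intermediate weights interpolate toward (T+I)2V."""
    d = q.shape[-1]
    a_text = softmax(q @ k_text.T / np.sqrt(d)) @ v_text  # text stream
    a_img = softmax(q @ k_img.T / np.sqrt(d)) @ v_img     # image stream
    return (w_text * a_text + w_img * a_img) / (w_text + w_img)
```

With `w_text=1, w_img=1` the result is the average of the two streams, and sweeping one weight toward zero smoothly reduces the bimodal control to a single modality, matching the interpolation behavior the key points describe.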