UniVidX: A Unified Multimodal Framework for Versatile Video Generation via Diffusion Priors

arXiv cs.CV / 5/4/2026


Key Points

  • The paper introduces UniVidX, a unified multimodal video generation framework that repurposes video diffusion model (VDM) priors for diverse multimodal graphics tasks without training a separate model per problem setting.
  • UniVidX reformulates pixel-aligned problems as conditional generation in a shared multimodal space, using Stochastic Condition Masking (SCM) to support omni-directional conditioning rather than fixed input-output mappings (see the sketch after this list).
  • Decoupled Gated LoRA (DGL) activates modality-specific low-rank adapters only when a modality is the generation target, aiming to preserve the VDM's original priors.
  • Cross-Modal Self-Attention (CMSA) exchanges information across modalities by sharing keys and values while keeping modality-specific queries, improving cross-modal consistency.
  • Experiments on two instantiated variants (UniVid-Intrinsic for RGB plus intrinsic maps, and UniVid-Alpha for blended RGB videos plus their constituent RGBA layers) show performance competitive with state-of-the-art methods and robust generalization to in-the-wild scenarios, even with fewer than 1,000 training videos.
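To make the conditioning scheme concrete, here is a minimal PyTorch-style sketch of the SCM idea: each modality is independently marked as a clean condition or a noisy target, and the standard diffusion forward process is applied only to the targets. The tensor layout, the 0.5 masking rate, and the function name are illustrative assumptions, not the paper's implementation.

```python
import torch

def scm_partition(latents: torch.Tensor, alpha_bar_t: torch.Tensor):
    """Sketch of Stochastic Condition Masking (SCM).

    latents:     (B, M, C, T, H, W) clean per-modality video latents
                 (hypothetical layout; M = number of pixel-aligned modalities,
                 e.g. RGB / albedo / normal).
    alpha_bar_t: (B,) cumulative noise-schedule coefficient at the sampled step.
    """
    B, M = latents.shape[:2]
    dev = latents.device

    # Randomly mark each modality as a noisy target (rate 0.5 is illustrative);
    # force at least one target per sample so the denoising loss is never empty.
    is_target = torch.rand(B, M, device=dev) < 0.5
    is_target[torch.arange(B, device=dev), torch.randint(0, M, (B,), device=dev)] = True

    # Standard DDPM-style forward process, applied only to target modalities;
    # condition modalities pass through clean, so any subset of modalities can
    # condition any other subset (omni-directional conditioning).
    noise = torch.randn_like(latents)
    a = alpha_bar_t.view(B, 1, 1, 1, 1, 1)
    noised = a.sqrt() * latents + (1.0 - a).sqrt() * noise
    mask = is_target.view(B, M, 1, 1, 1, 1).to(latents.dtype)
    model_input = mask * noised + (1.0 - mask) * latents
    return model_input, noise, is_target  # loss is taken on target modalities only
```

At inference, fixing `is_target` to a chosen subset would reproduce any conditional direction (e.g. RGB → intrinsics or intrinsics → RGB) with the same weights, which is what replaces the fixed input-output mapping of per-task models.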

Abstract

Recent progress has shown that video diffusion models (VDMs) can be repurposed for diverse multimodal graphics tasks. However, existing methods often train separate models for each problem setting, which fixes the input-output mapping and limits the modeling of correlations across modalities. We present UniVidX, a unified multimodal framework that leverages VDM priors for versatile video generation. UniVidX formulates pixel-aligned tasks as conditional generation in a shared multimodal space, adapts to modality-specific distributions while preserving the backbone's native priors, and promotes cross-modal consistency during synthesis. It is built on three key designs. Stochastic Condition Masking (SCM) randomly partitions modalities into clean conditions and noisy targets during training, enabling omni-directional conditional generation instead of fixed mappings. Decoupled Gated LoRA (DGL) introduces per-modality LoRAs that are activated when a modality serves as the generation target, preserving the strong priors of the VDM. Cross-Modal Self-Attention (CMSA) shares keys and values across modalities while keeping modality-specific queries, facilitating information exchange and inter-modal alignment. We instantiate UniVidX in two domains: UniVid-Intrinsic, for RGB videos and intrinsic maps including albedo, irradiance, and normal; and UniVid-Alpha, for blended RGB videos and their constituent RGBA layers. Experiments show that both models achieve performance competitive with state-of-the-art methods across distinct tasks and generalize robustly to in-the-wild scenarios, even when trained on fewer than 1,000 videos. Project page: https://houyuanchen111.github.io/UniVidX.github.io/
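For the other two components, below is a compact sketch of how per-modality gated LoRA and shared-key/value attention could look in PyTorch. The class names, the rank, the einsum layout, and the token shape (B, M, N, D) are assumptions made for illustration; the paper's actual architecture may differ.

```python
import torch
import torch.nn as nn

class GatedLoRALinear(nn.Module):
    """Decoupled Gated LoRA (DGL) sketch: one low-rank adapter per modality,
    switched on only where that modality is the generation target, so condition
    streams run through the frozen backbone and its priors stay intact."""

    def __init__(self, base: nn.Linear, num_modalities: int, rank: int = 16):
        super().__init__()
        self.base = base.requires_grad_(False)  # frozen pretrained VDM projection
        self.down = nn.Parameter(0.01 * torch.randn(num_modalities, base.in_features, rank))
        self.up = nn.Parameter(torch.zeros(num_modalities, rank, base.out_features))

    def forward(self, x: torch.Tensor, is_target: torch.Tensor) -> torch.Tensor:
        # x: (B, M, N, D_in) tokens per modality; is_target: (B, M) boolean gate.
        delta = torch.einsum('bmnd,mdr->bmnr', x, self.down)
        delta = torch.einsum('bmnr,mro->bmno', delta, self.up)
        gate = is_target.to(x.dtype)[:, :, None, None]
        return self.base(x) + gate * delta


class CrossModalSelfAttention(nn.Module):
    """CMSA sketch: modality-specific queries attend over keys/values pooled
    from all modality streams, encouraging pixel-aligned cross-modal consistency."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.h, self.d = num_heads, dim // num_heads
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, M, N, D) — M modality streams with N tokens each.
        B, M, N, D = x.shape
        q = self.q(x).view(B, M, N, self.h, self.d).permute(0, 1, 3, 2, 4)  # (B,M,H,N,d)
        k, v = self.kv(x).chunk(2, dim=-1)
        # Shared keys/values: flatten the modality axis so every modality's
        # queries attend over tokens from all M streams at once.
        k = k.reshape(B, 1, M * N, self.h, self.d).permute(0, 1, 3, 2, 4)   # (B,1,H,MN,d)
        v = v.reshape(B, 1, M * N, self.h, self.d).permute(0, 1, 3, 2, 4)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d ** 0.5, dim=-1)
        out = (attn @ v).permute(0, 1, 3, 2, 4).reshape(B, M, N, D)
        return self.proj(out)
```

The gating keeps the frozen backbone's behavior bit-for-bit on condition streams (the adapter term is zeroed there), while the shared K/V pool is one plausible way to realize the paper's stated design of modality-specific queries over cross-modal keys and values.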