AI Navigate

Improving Joint Audio-Video Generation with Cross-Modal Context Learning

arXiv cs.CV · March 20, 2026


Key Points

  • The paper proposes Cross-Modal Context Learning (CCL) to improve joint audio-video generation, addressing dual-stream transformer limitations such as gating-induced model manifold variations, cross-modal attention biases, train-inference CFG inconsistencies, and conflicts between multiple conditions, while leveraging pre-trained video and audio diffusion models.
  • It introduces Temporally Aligned RoPE and Partitioning (TARP) to boost temporal alignment between audio and video latent representations, and Learnable Context Tokens (LCT) with Dynamic Context Routing (DCR) inside Cross-Modal Context Attention (CCA) to provide stable unconditional anchors and task-aware routing.
  • During inference, Unconditional Context Guidance (UCG) leverages the unconditional support from LCT to improve train-inference consistency across different CFG setups, reducing conflicts between conditions.
  • Empirical evaluation shows state-of-the-art performance with substantially fewer computational resources than recent methods.
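The TARP idea summarized above — aligning audio and video latents in time — can be illustrated with rotary position embeddings (RoPE) indexed by real time rather than token index, so that tokens from the same moment share the same rotation phase. The paper's exact formulation is not given here; this is a minimal sketch, and the frame rates, function names, and dimensions are all illustrative assumptions.

```python
import numpy as np

def rope_angles(positions, dim, base=10000.0):
    """Rotary-embedding angles for 1-D positions; returns shape [T, dim/2]."""
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    return np.outer(positions, inv_freq)

def apply_rope(x, angles):
    """Rotate channel pairs of x (shape [T, dim]) by the given angles."""
    x1, x2 = x[:, 0::2], x[:, 1::2]
    cos, sin = np.cos(angles), np.sin(angles)
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Hypothetical rates: video latents at 8 tokens/s, audio latents at 25 tokens/s.
# Indexing RoPE by timestamp (seconds) instead of token index puts both
# streams on one shared temporal axis.
video_pos = np.arange(16) / 8.0    # 2 s of video tokens
audio_pos = np.arange(50) / 25.0   # 2 s of audio tokens
```

With this indexing, the video token at 1.0 s (`video_pos[8]`) and the audio token at 1.0 s (`audio_pos[25]`) receive identical rotation angles, so cross-modal attention sees them as temporally coincident despite the different token rates.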

Abstract

Joint audio-video generation methods built on dual-stream transformer architectures have become the dominant paradigm in current research. By incorporating pre-trained video and audio diffusion models together with a cross-modal interaction attention module, they can generate high-quality, temporally synchronized audio-video content with minimal training data. In this paper, we first revisit the dual-stream transformer paradigm and analyze its limitations: model manifold variations caused by the gating mechanism controlling cross-modal interactions, biases in multi-modal background regions introduced by cross-modal attention, inconsistencies in multi-modal classifier-free guidance (CFG) between training and inference, and conflicts between multiple conditions. To alleviate these issues, we propose Cross-Modal Context Learning (CCL), equipped with several carefully designed modules. Temporally Aligned RoPE and Partitioning (TARP) effectively enhances the temporal alignment between audio latent and video latent representations. The Learnable Context Tokens (LCT) and Dynamic Context Routing (DCR) in the Cross-Modal Context Attention (CCA) module provide stable unconditional anchors for cross-modal information while routing dynamically across training tasks, further improving the model's convergence speed and generation quality. During inference, Unconditional Context Guidance (UCG) leverages the unconditional support provided by LCT to facilitate different forms of CFG, improving train-inference consistency and further alleviating conflicts. Through comprehensive evaluations, CCL achieves state-of-the-art performance compared with recent academic methods while requiring substantially fewer resources.
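The multi-condition CFG setting the abstract describes can be sketched as combining one shared unconditional prediction with several conditional predictions, each weighted relative to that anchor. This is the standard multi-guidance form of classifier-free guidance, not the paper's specific UCG formulation; the function name and weights below are illustrative assumptions.

```python
import numpy as np

def cfg_combine(eps_uncond, cond_preds, weights):
    """Classifier-free guidance with multiple conditions: push the output
    away from a single shared unconditional anchor along each conditional
    direction, scaled by its guidance weight."""
    out = eps_uncond.copy()
    for eps_c, w in zip(cond_preds, weights):
        out += w * (eps_c - eps_uncond)
    return out

# Toy example: one "text" condition and one "cross-modal" condition.
eps_uncond = np.zeros(4)
eps_text = np.ones(4)
eps_cross = np.full(4, 2.0)
guided = cfg_combine(eps_uncond, [eps_text, eps_cross], [3.0, 1.5])
```

A stable unconditional anchor matters here because every conditional term is measured against `eps_uncond`; if the unconditional branch drifts between training and inference, all guidance directions shift with it, which is the train-inference inconsistency the paper attributes to conventional CFG setups.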