TS-Attn: Temporal-wise Separable Attention for Multi-Event Video Generation

arXiv cs.CV · April 22, 2026


Key Points

  • The paper addresses the unsolved challenge of generating coherent videos from complex temporal descriptions involving multiple sequential actions.
  • It identifies two main failure causes in existing approaches: temporal misalignment between the video and the prompt, and conflicting attention coupling between motion-related visual elements and their text conditions.
  • The authors propose TS-Attn, a training-free attention mechanism that dynamically rearranges attention to improve both temporal awareness and global coherence for multi-event scenarios.
  • TS-Attn can be added to various pre-trained text-to-video models, improving StoryEval-Bench scores by 33.5% (Wan2.1-T2V-14B) and 16.4% (Wan2.2-T2V-A14B) with only about a 2% increase in inference time.
  • The method is designed for plug-and-play use, including multi-event image-to-video generation, and the project code is released on GitHub.

Abstract

Generating high-quality videos from complex temporal descriptions that contain multiple sequential actions is a key unsolved problem. Existing methods are constrained by an inherent trade-off: feeding multiple short prompts sequentially into the model improves action fidelity but compromises temporal consistency, while a single complex prompt preserves consistency at the cost of prompt-following capability. We attribute this problem to two primary causes: 1) temporal misalignment between video content and the prompt, and 2) conflicting attention coupling between motion-related visual objects and their associated text conditions. To address these challenges, we propose a novel, training-free attention mechanism, Temporal-wise Separable Attention (TS-Attn), which dynamically rearranges the attention distribution to ensure temporal awareness and global coherence in multi-event scenarios. TS-Attn can be seamlessly integrated into various pre-trained text-to-video models, boosting StoryEval-Bench scores by 33.5% and 16.4% on Wan2.1-T2V-14B and Wan2.2-T2V-A14B, respectively, with only a 2% increase in inference time. It also supports plug-and-play usage across models for multi-event image-to-video generation. The source code and project page are available at https://github.com/Hong-yu-Zhang/TS-Attn.
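To make the "rearranged attention" idea concrete, the sketch below shows one way a temporally separable cross-attention mask *could* work: video frames belonging to event i are allowed to attend only to the text tokens describing event i, which directly targets the temporal-misalignment and attention-coupling failures the abstract identifies. This is an illustrative assumption for demonstration, not the authors' implementation; the function names, shapes, and the block-diagonal masking strategy are all hypothetical.

```python
# Illustrative sketch only (not the TS-Attn source code): event-local
# cross-attention masking for multi-event text-to-video generation.
import numpy as np

def separable_cross_attn_mask(frames_per_event, tokens_per_event):
    """Boolean mask of shape (total_frames, total_tokens); True where a
    video frame may attend to a text token. Frames of event i see only
    the prompt tokens of event i (block-diagonal structure)."""
    F, T = sum(frames_per_event), sum(tokens_per_event)
    mask = np.zeros((F, T), dtype=bool)
    f0 = t0 = 0
    for nf, nt in zip(frames_per_event, tokens_per_event):
        mask[f0:f0 + nf, t0:t0 + nt] = True  # event-local attention block
        f0, t0 = f0 + nf, t0 + nt
    return mask

def masked_cross_attention(q, k, v, mask):
    """Scaled dot-product attention with disallowed frame->token pairs
    set to -inf before the softmax, so their weights become zero."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -np.inf)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

# Toy setup: two events, 3 frames and 2 prompt tokens each, dim 8.
rng = np.random.default_rng(0)
mask = separable_cross_attn_mask([3, 3], [2, 2])
q = rng.normal(size=(6, 8))   # video-frame queries
k = rng.normal(size=(4, 8))   # prompt-token keys
v = rng.normal(size=(4, 8))   # prompt-token values
out = masked_cross_attention(q, k, v, mask)
```

Because the mechanism is purely a mask on existing attention scores, it requires no retraining, which is consistent with the "training-free" and "plug-and-play" claims; in a real pipeline the mask would be applied inside the pre-trained model's cross-attention layers rather than in a standalone function.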