KD-CVG: A Knowledge-Driven Approach for Creative Video Generation

arXiv cs.CV · April 24, 2026


Key Points

  • The paper introduces KD-CVG, a knowledge-driven method to improve creative video generation for advertising, which is less studied than text-and-image creative generation.
  • It targets two key Text-to-Video challenges—ambiguous semantic alignment between product selling points and video content, and inadequate motion adaptability causing unrealistic movements.
  • KD-CVG builds an Advertising Creative Knowledge Base (ACKB) and uses two modules: Semantic-Aware Retrieval (SAR) to better connect selling points with videos via graph attention and reinforcement learning feedback, and Multimodal Knowledge Reference (MKR) to inject semantic and motion priors into the T2V model.
  • Experiments show KD-CVG achieves better semantic alignment and more realistic, adaptable motion than existing state-of-the-art approaches.
  • The authors state that the code and dataset will be open-sourced at the project website.
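The Semantic-Aware Retrieval idea above can be illustrated with a minimal sketch: score candidate creative-video embeddings against a selling-point embedding with attention weights and return the best matches. This is a simplification under assumed names and shapes; the paper's actual SAR module uses graph attention networks with reinforcement-learning feedback, neither of which is reproduced here.

```python
import numpy as np

def attention_retrieve(query, candidates, k=2):
    """Rank candidate video embeddings against a selling-point
    embedding via scaled dot-product attention and return the
    top-k indices. (Illustrative sketch only -- KD-CVG's SAR
    module uses graph attention plus RL feedback, not shown.)"""
    # Scaled dot-product scores; a learned projection would
    # normally precede this step (identity assumed here).
    scores = candidates @ query / np.sqrt(query.size)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()            # softmax attention weights
    top = np.argsort(-weights)[:k]      # best-matching candidates first
    return top, weights

rng = np.random.default_rng(0)
d = 8
query = rng.normal(size=d)              # selling-point embedding
candidates = rng.normal(size=(4, d)) * 0.1   # mostly unrelated videos
candidates[1] = query                   # one candidate matches the query
top, w = attention_retrieve(query, candidates, k=2)
print(top[0])  # → 1 (the matching candidate ranks first)
```

In a trained system the attention weights would be shaped by feedback rather than raw similarity, but the retrieval interface — selling point in, ranked creative videos out — is the same.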

Abstract

Creative Generation (CG) leverages generative models to automatically produce advertising content that highlights product features, and it has been a significant focus of recent research. However, while CG has advanced considerably, most efforts have concentrated on generating advertising text and images, leaving Creative Video Generation (CVG) relatively underexplored. This gap is largely due to two major challenges faced by Text-to-Video (T2V) models: (a) ambiguous semantic alignment, where models struggle to accurately correlate product selling points with creative video content, and (b) inadequate motion adaptability, resulting in unrealistic movements and distortions. To address these challenges, we develop a comprehensive Advertising Creative Knowledge Base (ACKB) as a foundational resource and propose a knowledge-driven approach (KD-CVG) to overcome the knowledge limitations of existing models. KD-CVG consists of two primary modules: Semantic-Aware Retrieval (SAR) and Multimodal Knowledge Reference (MKR). SAR utilizes the semantic awareness of graph attention networks and reinforcement learning feedback to enhance the model's comprehension of the connections between selling points and creative videos. Building on this, MKR incorporates semantic and motion priors into the T2V model to address existing knowledge gaps. Extensive experiments have demonstrated KD-CVG's superior performance in achieving semantic alignment and motion adaptability, validating its effectiveness over other state-of-the-art methods. The code and dataset will be open-sourced at https://kdcvg.github.io/KDCVG/.
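The MKR step — incorporating semantic and motion priors into the T2V model — can be sketched as extending the model's conditioning context with retrieved prior tokens. The function name, token counts, and embedding dimension below are all assumptions for illustration; the paper does not specify how its priors are injected.

```python
import numpy as np

def build_conditioning(text_tokens, semantic_prior, motion_prior):
    """Concatenate prompt tokens with retrieved semantic and motion
    prior tokens into a single cross-attention context sequence.
    (Hypothetical injection scheme -- one common way to condition a
    diffusion-based T2V model on extra reference knowledge.)"""
    return np.concatenate([text_tokens, semantic_prior, motion_prior], axis=0)

d = 16                               # embedding dimension (assumed)
text = np.zeros((77, d))             # encoded prompt, e.g. 77 tokens
sem = np.ones((8, d))                # semantic-prior tokens from the ACKB
mot = np.full((4, d), 2.0)           # motion-prior tokens from a reference clip
ctx = build_conditioning(text, sem, mot)
print(ctx.shape)  # → (89, 16)
```

The T2V backbone would then cross-attend over `ctx` at each denoising step, so the prior tokens can steer both the depicted semantics and the motion without retraining the text encoder.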