Exploring the Role of Synthetic Data Augmentation in Controllable Human-Centric Video Generation

arXiv cs.CV / 4/24/2026


Key Points

  • The paper studies how synthetic data can improve controllable, human-centric video generation where motion and appearance are explicitly guided.
  • It addresses a key bottleneck: limited large-scale, diverse, and privacy-safe human video datasets—especially problematic for rare identities and complex actions.
  • The authors propose a diffusion-based framework that offers fine-grained control and a unified testbed to analyze the interaction between synthetic and real data during training.
  • Extensive experiments show complementary effects of synthetic and real data and suggest efficient ways to select synthetic samples to improve motion realism, temporal consistency, and identity preservation.
  • The work positions itself as the first comprehensive exploration of synthetic data’s role in this area and delivers practical guidance for building more data-efficient and generalizable generative models.
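The paper does not publish its selection procedure in this summary, but the mixed-training idea in the points above can be sketched in miniature: score synthetic clips with some proxy quality metric, keep the best fraction, and mix them into real-data batches at a fixed ratio. The function names, the `score_fn` callback, and the keep/mix ratios below are all hypothetical illustrations, not the authors' method.

```python
import random

def select_synthetic(clips, score_fn, keep_ratio=0.5):
    """Rank synthetic clips by a proxy quality score and keep the top fraction.

    `score_fn` is a hypothetical stand-in for any per-clip metric
    (e.g., an off-the-shelf motion-realism or identity-similarity estimator).
    """
    ranked = sorted(clips, key=score_fn, reverse=True)
    k = max(1, int(len(ranked) * keep_ratio))
    return ranked[:k]

def mix_batch(real, synthetic, synth_frac=0.3, batch_size=8, seed=0):
    """Draw one training batch with a fixed fraction of synthetic samples."""
    rng = random.Random(seed)
    n_synth = int(batch_size * synth_frac)
    batch = rng.sample(synthetic, n_synth) + rng.sample(real, batch_size - n_synth)
    rng.shuffle(batch)  # avoid a fixed synthetic/real ordering within the batch
    return batch
```

A usage sketch: with ten synthetic clips scored 0 through 9 and `keep_ratio=0.5`, `select_synthetic` keeps the five highest-scoring clips, and `mix_batch` then draws batches that are 30% synthetic. The single-ratio mixing rule is the simplest possible curriculum; the paper's "efficient selection" presumably goes well beyond this.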

Abstract

Controllable human video generation aims to produce realistic videos of humans with explicitly guided motions and appearances, serving as a foundation for digital humans, animation, and embodied AI. However, the scarcity of large-scale, diverse, and privacy-safe human video datasets poses a major bottleneck, especially for rare identities and complex actions. Synthetic data provides a scalable and controllable alternative, yet its actual contribution to generative modeling remains underexplored due to the persistent Sim2Real gap. In this work, we systematically investigate the impact of synthetic data on controllable human video generation. We propose a diffusion-based framework that enables fine-grained control over appearance and motion while providing a unified testbed to analyze how synthetic data interacts with real-world data during training. Through extensive experiments, we reveal the complementary roles of synthetic and real data and demonstrate possible methods for efficiently selecting synthetic samples to enhance motion realism, temporal consistency, and identity preservation. Our study offers the first comprehensive exploration of synthetic data's role in human-centric video synthesis and provides practical insights for building data-efficient and generalizable generative models.