Prompt-to-Gesture: Measuring the Capabilities of Image-to-Video Deictic Gesture Generation

arXiv cs.CV · April 17, 2026


Key Points

  • The paper addresses a key bottleneck in gesture recognition research: scarce data and the high cost of collecting authentic human recordings.
  • It proposes a prompt-based image-to-video generation pipeline to create a realistic dataset of deictic (pointing/indicating) gestures from only a small set of human reference samples (illustrated in the sketch after this list).
  • The authors evaluate the synthetic deictic gestures both for visual fidelity and for the variability and novelty they add relative to real gesture data.
  • Experimental results suggest that combining synthetic and real data improves the performance of multiple downstream deep learning models, indicating the synthetic data is genuinely useful.
  • The work concludes that early-stage image-to-video generative techniques can serve as a powerful zero-shot approach for gesture synthesis and can complement human-generated datasets.
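
To make the generation step concrete, here is a minimal sketch of prompt-guided image-to-video synthesis using the open I2VGenXL checkpoint served through Hugging Face `diffusers`. The model choice, prompts, file paths, and sampling parameters are illustrative assumptions; the paper does not disclose its exact generation stack.

```python
# Hedged sketch: prompt-guided image-to-video generation from a few human
# reference frames. Model, prompts, and paths are assumptions for
# illustration, not the authors' published setup.
import os

import torch
from diffusers import I2VGenXLPipeline
from diffusers.utils import export_to_video, load_image

pipe = I2VGenXLPipeline.from_pretrained(
    "ali-vilab/i2vgen-xl", torch_dtype=torch.float16, variant="fp16"
)
pipe.enable_model_cpu_offload()  # keeps peak VRAM usage manageable

# A handful of human reference frames seed the whole synthetic dataset.
references = ["refs/participant_01.png", "refs/participant_02.png"]
prompts = [
    "a person pointing to an object on their left with an extended arm",
    "a person pointing upward with their index finger",
]

os.makedirs("synthetic", exist_ok=True)
generator = torch.Generator().manual_seed(0)  # vary the seed for diversity

for i, ref in enumerate(references):
    image = load_image(ref).convert("RGB")
    for j, prompt in enumerate(prompts):
        frames = pipe(
            prompt=prompt,
            image=image,
            negative_prompt="distorted hands, blurry, low quality",
            num_inference_steps=50,
            guidance_scale=9.0,
            generator=generator,
        ).frames[0]  # list of PIL frames for the generated clip
        export_to_video(frames, f"synthetic/gesture_{i}_{j}.mp4", fps=8)
```

Sweeping over prompt phrasings and seeds per reference image is one way to obtain the variability the paper measures; each (reference, prompt) pair yields a distinct clip.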

Abstract

Gesture recognition research, unlike NLP, continues to face acute data scarcity, with progress constrained by the need for costly human recordings or image processing approaches that cannot generate authentic variability in the gestures themselves. Recent advancements in image-to-video foundation models have enabled the generation of photorealistic, semantically rich videos guided by natural language. These capabilities open up new possibilities for creating effort-free synthetic data, raising the critical question of whether generative video models can augment and complement traditional human-generated gesture data. In this paper, we introduce and analyze prompt-based video generation to construct a realistic deictic gesture dataset and rigorously evaluate its effectiveness for downstream tasks. We propose a data generation pipeline that produces deictic gestures from a small number of reference samples collected from human participants, providing an accessible approach that can be leveraged both within and beyond the machine learning community. Our results demonstrate that the synthetic gestures not only align closely with real ones in terms of visual fidelity but also introduce meaningful variability and novelty that enrich the original data, further supported by the superior performance of various deep models trained on a mixed dataset. These findings highlight that image-to-video techniques, even in their early stages, offer a powerful zero-shot approach to gesture synthesis with clear benefits for downstream tasks.
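
To make the mixed-data result concrete, the sketch below pools real and synthetic gesture clips into a single training set for an off-the-shelf video classifier. The GestureClipDataset class, directory layout, r3d_18 backbone, and hyperparameters are hypothetical stand-ins, not the authors' reported configuration.

```python
# Hedged sketch: training a downstream video classifier on a mix of real and
# prompt-generated synthetic gesture clips. Dataset class, paths, backbone,
# and hyperparameters are illustrative assumptions.
import torch
from torch import nn
from torch.utils.data import ConcatDataset, DataLoader, Dataset
from torchvision.models.video import r3d_18


class GestureClipDataset(Dataset):
    """Hypothetical dataset yielding (clip, label) pairs.

    Each clip is a float tensor of shape (C, T, H, W), e.g. (3, 16, 112, 112).
    """

    def __init__(self, root: str):
        self.root = root
        self.items = []  # populate with (clip_path, label) pairs in practice

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        path, label = self.items[idx]
        clip = torch.load(path)  # assumes clips were pre-extracted as tensors
        return clip, label


# Pool real recordings with the prompt-generated synthetic clips.
mixed_train = ConcatDataset([
    GestureClipDataset("data/real/train"),
    GestureClipDataset("data/synthetic/train"),
])
loader = DataLoader(mixed_train, batch_size=8, shuffle=True, num_workers=4)

model = r3d_18(num_classes=4)  # e.g. four deictic gesture classes
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One epoch of a standard supervised training loop over the mixed set.
for clips, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(clips), labels)
    loss.backward()
    optimizer.step()
```

Shuffling the concatenated set interleaves real and synthetic samples within each batch, which is the simplest way to realize the paper's "mixed dataset" condition; comparing this against training on the real subset alone reproduces the kind of ablation the results describe.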