Speech-Synchronized Whiteboard Generation via VLM-Driven Structured Drawing Representations

arXiv cs.LG / 3/30/2026


Key Points

  • The paper introduces a new dataset of 24 Excalidraw demonstrations paired with narrated audio, covering 8 STEM domains, with millisecond-precision timestamps for every drawing element.
  • It fine-tunes a vision-language model (Qwen2-VL-7B) with LoRA to generate structured stroke sequences synchronized to speech, training on only the small demonstration set.
  • Topic-stratified five-fold experiments show that conditioning on timestamps substantially improves temporal alignment versus ablated baselines.
  • The model demonstrates cross-topic generalization to unseen STEM subjects, suggesting transferability beyond the training domains.
  • The authors discuss how the approach could extend to real classroom production workflows and release the dataset and code for further research.
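The core idea above is a structured, timestamped drawing representation that a language model can emit as a sequence. The paper's exact schema is not reproduced here; the sketch below is a hypothetical minimal version, with illustrative field names (`element_type`, `points`, `t_start_ms`) standing in for whatever the released dataset actually uses.

```python
# Hypothetical sketch of timestamped Excalidraw-like drawing elements and a
# serialization into an ordered, token-like sequence a model could be trained
# to generate. Field names and the tag format are assumptions, not the
# paper's actual schema.
from dataclasses import dataclass


@dataclass
class DrawingElement:
    element_type: str  # e.g. "freedraw", "arrow", "text"
    points: list       # (x, y) coordinates making up the element
    t_start_ms: int    # millisecond-precision creation timestamp


def serialize(elements):
    """Sort elements by creation time and flatten each into a compact tag,
    so drawing order and timing are explicit in the output sequence."""
    ordered = sorted(elements, key=lambda e: e.t_start_ms)
    return [f"<{e.element_type}@{e.t_start_ms}ms:{len(e.points)}pts>"
            for e in ordered]


demo = [
    DrawingElement("text", [(10, 10)], 2400),
    DrawingElement("freedraw", [(0, 0), (5, 5), (9, 3)], 800),
]
print(serialize(demo))  # freedraw (800 ms) comes before text (2400 ms)
```

Sorting by `t_start_ms` is what makes the representation speech-synchronizable: the model's output order directly encodes when each element should appear relative to the narration.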

Abstract

Creating whiteboard-style educational videos demands precise coordination between freehand illustrations and spoken narration, yet no existing method addresses this multimodal synchronization problem with structured, reproducible drawing representations. We present the first dataset of 24 paired Excalidraw demonstrations with narrated audio, where every drawing element carries millisecond-precision creation timestamps spanning 8 STEM domains. Using this data, we study whether a vision-language model (Qwen2-VL-7B), fine-tuned via LoRA, can predict full stroke sequences synchronized to speech from only 24 demonstrations. Our topic-stratified five-fold evaluation reveals that timestamp conditioning significantly improves temporal alignment over ablated baselines, while the model generalizes across unseen STEM topics. We discuss transferability to real classroom settings and release our dataset and code to support future research in automated educational content generation.