New AI model generates 45-minute lip-synced video from one photo and runs in real time

THE DECODER / 4/14/2026


Key Points

  • The research project LPM 1.0 can generate a 45-minute, lip-synced talking video from a single input photo, complete with facial expressions and emotional reactions.
  • The model is described as running in real time, enabling interactive generation rather than only offline rendering.
  • The output includes synchronized mouth movement aligned to speech content, focusing on avatar-style character animation driven by static imagery.
  • Despite these capabilities, the work is positioned as a research prototype rather than a publicly available product.

A single image becomes a talking character: LPM 1.0 generates real-time video with lip sync, facial expressions, and emotional reactions. For now, it remains a research project.
