TurboTalk: Progressive Distillation for One-Step Audio-Driven Talking Avatar Generation
arXiv cs.CV · April 17, 2026
Key Points
- The paper introduces TurboTalk, a progressive distillation framework designed to convert a multi-step audio-driven talking-avatar diffusion model into a single-step generator.
- It uses a two-stage approach: Distribution Matching Distillation first trains a stable 4-step "student," then adversarial distillation progressively halves the number of denoising steps from 4 down to 1.
- To prevent training instability during extreme step reduction, TurboTalk adds progressive timestep sampling and a self-compare adversarial objective that stabilizes the distillation process.
- Experiments report single-step video generation with a claimed 120× inference speedup while maintaining high generation quality.
- The work targets practical deployment constraints by substantially reducing computational overhead inherent in multi-step denoising pipelines.
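The progressive step reduction described above can be sketched schematically. The exact schedule TurboTalk uses is not given in this summary, so the step counts (4 → 2 → 1), the 1000-step teacher schedule, and the even-spacing rule below are all illustrative assumptions, not details from the paper:

```python
# Hypothetical sketch of progressive step reduction. The teacher schedule
# length (1000), the halving sequence (4 -> 2 -> 1), and the even-spacing
# rule are assumptions for illustration, not taken from the paper.

def spaced_timesteps(num_train_steps: int, num_infer_steps: int) -> list[int]:
    """Pick `num_infer_steps` timesteps evenly spaced over the teacher's
    training schedule, in descending order as used at inference."""
    stride = num_train_steps // num_infer_steps
    return [num_train_steps - 1 - i * stride for i in range(num_infer_steps)]

# Each distillation stage trains a student that reproduces the previous
# stage's output while sampling only the (shrinking) timestep subset.
for num_steps in (4, 2, 1):
    ts = spaced_timesteps(1000, num_steps)
    print(num_steps, ts)
# 4 [999, 749, 499, 249]
# 2 [999, 499]
# 1 [999]
```

The final stage leaves a single timestep, i.e., one forward pass of the student produces the video frame directly, which is where the reported 120× inference speedup would come from.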
