LPM 1.0: Video-based Character Performance Model
arXiv cs.CV / 4/10/2026
News · Signals & Early Trends · Models & Research
Key Points
- The paper introduces LPM 1.0 (Large Performance Model), which learns video-based character performance (intent, emotion, and personality) directly from audio-visual conversation, bypassing traditional 3D character pipelines.
- It targets a stated “performance trilemma” by jointly improving expressiveness, real-time inference, and long-horizon identity stability, focusing specifically on single-person full-duplex audio-visual conversational performance.
- LPM 1.0 builds a multimodal human-centric dataset using strict filtering and identity-aware multi-reference extraction, then trains a 17B-parameter Diffusion Transformer for controllable, identity-consistent generation via multimodal conditioning.
- The model is distilled into an Online LPM causal streaming generator designed for low-latency, infinite-length interactions, enabling real-time listening/speaking video synthesis from user audio and synthesized speech.
- The work also proposes LPM-Bench, a new benchmark for interactive character performance, reporting state-of-the-art results across evaluated dimensions.
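The "Online LPM" point above hinges on causal streaming: each video chunk is generated from the current audio chunk plus accumulated past context only, never future audio, so the loop can run indefinitely at low latency. The paper does not publish implementation details, so the sketch below is purely illustrative; `StreamState`, `generate_chunk`, and the mean-based placeholder "frame" are invented stand-ins, not the model's actual API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class StreamState:
    """Hypothetical causal context carried across chunks
    (stands in for a KV cache in a real causal generator)."""
    history: List[float] = field(default_factory=list)

def generate_chunk(audio_chunk: List[float], state: StreamState,
                   frames_per_chunk: int = 4) -> List[float]:
    """Illustrative causal step: emit video 'frames' conditioned only on
    the current audio chunk and past context -- never on future audio."""
    state.history.extend(audio_chunk)
    # Placeholder "frame" value: running mean of all audio seen so far.
    level = sum(state.history) / len(state.history)
    return [level] * frames_per_chunk

def stream(audio_chunks: List[List[float]]) -> List[float]:
    """Unbounded generation loop: state persists, chunks arrive one at a time,
    which is what enables 'infinite-length' interaction."""
    state = StreamState()
    frames: List[float] = []
    for chunk in audio_chunks:
        frames.extend(generate_chunk(chunk, state))
    return frames
```

The key property the sketch demonstrates is that latency is bounded per chunk while memory of the conversation accumulates in the state, which is the structural requirement behind the distillation into a causal streaming generator.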