ID-LoRA: Identity-Driven Audio-Video Personalization with In-Context LoRA
arXiv cs.CV / 3/12/2026
Key Points
- ID-LoRA jointly generates a subject's appearance and voice in a single generative pass, allowing a text prompt, a reference image, and a short audio clip to govern both modalities together.
- The method adapts the LTX-2 joint audio-video diffusion backbone via parameter-efficient In-Context LoRA and assigns reference tokens negative temporal positions to keep them distinct from generation tokens (see the position sketch after this list).
- It introduces identity guidance, a classifier-free guidance variant that amplifies speaker-specific features by contrasting predictions made with and without the reference signal (a rough sketch of the combination also follows the list).
- In human preference studies, ID-LoRA is preferred over Kling 2.6 Pro for voice similarity and speaking style, with cross-environment gains; training uses around 3,000 pairs on a single GPU, and the code, models, and data are slated for release.
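The negative-position idea can be illustrated with a minimal sketch: reference tokens take temporal indices below zero, so the positional encoding never lets them overlap with the frames being generated. The function name and the exact index layout here are assumptions for illustration; the paper's actual scheme for LTX-2's positional encoding may differ.

```python
import torch

def temporal_positions(num_ref_tokens: int, num_gen_frames: int) -> torch.Tensor:
    """Hypothetical sketch: reference tokens get negative temporal indices,
    generated frames get the usual 0..T-1 range, so the two groups never
    share a position."""
    ref_pos = torch.arange(-num_ref_tokens, 0)   # e.g. [-2, -1] for 2 reference tokens
    gen_pos = torch.arange(0, num_gen_frames)    # e.g. [0, 1, ..., T-1]
    return torch.cat([ref_pos, gen_pos])
```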
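For identity guidance, the key point only states that predictions with and without the reference signal are contrasted in classifier-free-guidance style. Below is a minimal sketch under that reading; the three-pass structure, the function and argument names, and the combination rule are illustrative assumptions, not the authors' formulation.

```python
import torch

def identity_guided_eps(model, x_t, t, text_cond, ref_cond,
                        cfg_scale: float = 7.5, id_scale: float = 2.0) -> torch.Tensor:
    """Hypothetical CFG-style combination: the usual text guidance term plus an
    extra term that amplifies whatever the reference (identity) signal adds."""
    eps_uncond = model(x_t, t, text=None, ref=None)           # fully unconditional pass
    eps_text   = model(x_t, t, text=text_cond, ref=None)      # text-conditioned pass
    eps_id     = model(x_t, t, text=text_cond, ref=ref_cond)  # text + reference pass

    return (eps_uncond
            + cfg_scale * (eps_text - eps_uncond)   # standard classifier-free guidance
            + id_scale  * (eps_id - eps_text))      # identity guidance term
```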