UniVid: Pyramid Diffusion Model for High Quality Video Generation
arXiv cs.CV / 3/17/2026
📰 News · Models & Research
Key Points
- UniVid is a unified video generation model that enables T2V, I2V, and (T+I)2V generation by using both text prompts and a reference image as controls.
- It scales a pre-trained text-to-image diffusion backbone to video by adding temporal-pyramid cross-frame attention modules and convolutions, producing temporally coherent frames.
- It introduces a dual-stream cross-attention mechanism whose attention scores can be re-weighted to interpolate between single-modal and bimodal controls during inference.
- Experimental results show UniVid achieves superior temporal coherence across T2V, I2V, and (T+I)2V tasks.
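The dual-stream cross-attention described above can be sketched in code. In this hypothetical NumPy sketch (function names, the joint-softmax formulation, and the log-weight re-weighting rule are all illustrative assumptions, not the paper's exact method), queries attend jointly over text tokens and image tokens, and a per-stream weight rescales each stream's share of attention mass: setting one weight to zero recovers single-modal (T2V- or I2V-style) control, while intermediate weights interpolate toward bimodal (T+I)2V control.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dual_stream_cross_attention(q, text_k, text_v, img_k, img_v,
                                w_text=1.0, w_img=1.0):
    """Illustrative sketch of re-weightable dual-stream cross-attention.

    A joint softmax runs over both condition streams; adding log(w) to a
    stream's scores multiplies that stream's softmax mass by w, so
    w -> 0 effectively disables the stream (single-modal control).
    """
    d = q.shape[-1]
    s_text = q @ text_k.swapaxes(-2, -1) / np.sqrt(d)   # scores vs. text tokens
    s_img = q @ img_k.swapaxes(-2, -1) / np.sqrt(d)     # scores vs. image tokens
    eps = 1e-12  # keeps log() finite when a weight is exactly 0
    s_text = s_text + np.log(w_text + eps)
    s_img = s_img + np.log(w_img + eps)
    attn = softmax(np.concatenate([s_text, s_img], axis=-1))
    n = text_k.shape[-2]
    # split the joint attention back into per-stream pieces
    return attn[..., :n] @ text_v + attn[..., n:] @ img_v
```

At inference time one would sweep `w_text`/`w_img` rather than retrain, which is presumably what makes a single model cover T2V, I2V, and (T+I)2V.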