AI Navigate

Event-Driven Video Generation

arXiv cs.CV · March 17, 2026

📰 News · Models & Research

Key Points

  • The paper identifies frame-first denoising as a primary source of interaction hallucinations in text-to-video models and proposes Event-Driven Video Generation (EVD) as a minimal DiT-compatible framework to ground sampling in events.
  • EVD introduces an event head that predicts token-aligned event activity and event-grounded losses that couple activity to state changes during training.
  • It employs event-gated sampling with hysteresis and early-step scheduling to suppress spurious updates and concentrate updates during interactions.
  • On EVD-Bench, the method improves human preference and VBench dynamics scores, substantially reducing failure modes in state persistence, spatial accuracy, support relations, and contact stability without sacrificing appearance.
  • The results suggest explicit event grounding as a practical abstraction for reducing interaction-related errors in video generation.
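The event-gated sampling idea above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the threshold values, warm-up fraction, and function names are assumptions, and real EVD operates on DiT latents rather than toy numpy arrays.

```python
import numpy as np

def hysteresis_gate(activity, on_thresh=0.6, off_thresh=0.4, prev_gate=None):
    """Per-token hysteresis: a gate opens when activity exceeds on_thresh
    and only closes again once activity drops below off_thresh.
    (Illustrative thresholds; the summary does not give the paper's values.)"""
    if prev_gate is None:
        prev_gate = np.zeros_like(activity, dtype=bool)
    return np.where(prev_gate, activity > off_thresh, activity > on_thresh)

def event_gated_step(latents, denoised, activity, step, num_steps,
                     prev_gate=None, warmup_frac=0.3):
    """One gated denoising update. Early steps update every token
    (a sketch of early-step scheduling); later steps only update tokens
    whose event gate is open, suppressing spurious state changes."""
    gate = hysteresis_gate(activity, prev_gate=prev_gate)
    if step < warmup_frac * num_steps:
        mask = np.ones_like(gate, dtype=bool)  # global updates early on
    else:
        mask = gate  # concentrate updates where events are active
    new_latents = np.where(mask[..., None], denoised, latents)
    return new_latents, gate
```

The two-threshold gate is what gives hysteresis: a token whose activity hovers near a single threshold would flicker on and off between steps, whereas here it must cross the lower threshold before its updates are suppressed again.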

Abstract

State-of-the-art text-to-video models often look realistic frame-by-frame yet fail on simple interactions: motion starts before contact, actions are not realized, objects drift after placement, and support relations break. We argue this stems from frame-first denoising, which updates latent state everywhere at every step without an explicit notion of when and where an interaction is active. We introduce Event-Driven Video Generation (EVD), a minimal DiT-compatible framework that makes sampling event-grounded: a lightweight event head predicts token-aligned event activity, event-grounded losses couple activity to state change during training, and event-gated sampling (with hysteresis and early-step scheduling) suppresses spurious updates while concentrating updates during interactions. On EVD-Bench, EVD consistently improves human preference and VBench dynamics, substantially reducing failure modes in state persistence, spatial accuracy, support relations, and contact stability without sacrificing appearance. These results indicate that explicit event grounding is a practical abstraction for reducing interaction hallucinations in video generation.
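The abstract's coupling of an event head to observed state change can likewise be sketched. Everything here is an assumed instantiation: the summary only says the head is lightweight and token-aligned and that losses couple activity to state change, so the linear probe and the BCE-against-normalized-change target below are hypothetical choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def event_head(tokens, w, b):
    """Minimal token-aligned event head: a linear probe plus sigmoid
    mapping each token embedding to an event-activity score in [0, 1].
    (Hypothetical parametrization.)"""
    return sigmoid(tokens @ w + b)

def event_grounded_loss(activity, latents_t, latents_prev, eps=1e-8):
    """One plausible event-grounded loss: per-token state-change magnitude,
    normalized to [0, 1], serves as a soft target for binary cross-entropy
    on the predicted activity. Not the paper's exact formulation."""
    delta = np.linalg.norm(latents_t - latents_prev, axis=-1)
    target = delta / (delta.max() + eps)
    act = np.clip(activity, eps, 1 - eps)
    return float(np.mean(-(target * np.log(act)
                           + (1 - target) * np.log(1 - act))))
```

Under this loss, the head is rewarded for firing exactly where the latent state actually changes between steps, which is the grounding property the abstract attributes to EVD's training signal.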