One Identity, Many Roles: Multimodal Entity Coreference for Enhanced Video Situation Recognition

arXiv cs.CV / 4/28/2026

📰 News · Models & Research

Key Points

  • The paper targets Video Situation Recognition, the task of determining “who did what to whom, with what, how, and where” in a video, which requires identifying event roles and giving short descriptions for them across multiple events.
  • It proposes Multimodal Entity Coreference (MEC), which links entity mentions in the text descriptions with entity grounding in the video within a consistent entity-identification framework.
  • The authors introduce CineMEC, a multi-stage method that connects event-role mention groups to visual entity clusters without explicit grounding supervision during training (see the sketch after this list).
  • They extend the VidSitu dataset with grounding annotations and report improved captioning consistency (CIDEr +2.5%, LEA +7%) and stronger visual grounding (HOTA +18%).
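To make the linking idea concrete, below is a minimal, hypothetical sketch of how text-side mention groups could be matched to visual entity clusters by cosine similarity plus a one-to-one assignment. The embeddings, shapes, and the Hungarian matching step are illustrative assumptions, not CineMEC's actual architecture.

```python
# Illustrative sketch only: match event-role mention groups (text) to visual
# entity clusters (video) via cosine similarity and Hungarian assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment


def link_mentions_to_clusters(mention_group_emb: np.ndarray,
                              visual_cluster_emb: np.ndarray):
    """Link each event-role mention group to at most one visual entity cluster.

    mention_group_emb: (M, D) pooled text embeddings, one per mention group.
    visual_cluster_emb: (K, D) pooled appearance embeddings, one per cluster.
    Returns a list of (mention_group_idx, cluster_idx, similarity) triples.
    """
    # Cosine similarity between every mention group and every visual cluster.
    a = mention_group_emb / np.linalg.norm(mention_group_emb, axis=1, keepdims=True)
    b = visual_cluster_emb / np.linalg.norm(visual_cluster_emb, axis=1, keepdims=True)
    sim = a @ b.T  # (M, K)

    # One-to-one assignment that maximizes total similarity.
    rows, cols = linear_sum_assignment(-sim)
    return [(int(r), int(c), float(sim[r, c])) for r, c in zip(rows, cols)]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    links = link_mentions_to_clusters(rng.normal(size=(4, 256)),
                                      rng.normal(size=(6, 256)))
    print(links)
```

A one-to-one assignment is only one possible design choice; a method trained without grounding supervision could instead use soft alignment scores, but the sketch conveys the core cross-modal linking step.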

Abstract

Video Situation Recognition (VidSitu) addresses the challenging problem of "who did what to whom, with what, how, and where" in a video. It tests thorough video understanding by requiring identification of salient actions and associated short descriptions for event roles across multiple events. Grounding in VidSitu requires spatio-temporal localization of key entities across shots and under varied appearances. We posit that coherent video understanding requires consistent identification of entities that play different roles. We propose Multimodal Entity Coreference (MEC) to unite entity descriptions in text with grounding across the video. Towards this, we introduce CineMEC, a multi-stage approach that unites event-role mention groups with visual clusters of entities, without explicit grounding supervision during training. Our approach is designed to exploit the synergy between visual grounding and captioning, where improvements in one reinforce the other. For evaluation, we extend the VidSitu dataset with grounding annotations. While previous work focuses primarily on descriptions, CineMEC improves consistency across both: captioning (+2.5% CIDEr, +7% LEA) and visual grounding (+18% HOTA).
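The "one identity, many roles" premise is that the same entity must be recognized across shots despite appearance changes. A minimal, assumed sketch of that step is below: per-shot detections are grouped into tentative identity clusters by agglomerative clustering over appearance embeddings. The distance threshold and the embedding source (e.g. a re-identification or CLIP-style encoder) are assumptions for illustration, not the paper's setup.

```python
# Minimal sketch: group per-shot entity detections into cross-shot identity
# clusters via average-linkage clustering of appearance embeddings.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist


def cluster_detections(appearance_emb: np.ndarray, max_cosine_dist: float = 0.4):
    """Assign a cluster id (a tentative identity) to each detection.

    appearance_emb: (N, D) array, one embedding per detected entity box.
    Returns an (N,) array of integer cluster labels.
    """
    # Average-linkage clustering on pairwise cosine distances between detections.
    dists = pdist(appearance_emb, metric="cosine")
    tree = linkage(dists, method="average")
    return fcluster(tree, t=max_cosine_dist, criterion="distance")


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two synthetic identities, three noisy detections each.
    base = rng.normal(size=(2, 128))
    emb = base[[0, 0, 0, 1, 1, 1]] + 0.01 * rng.normal(size=(6, 128))
    print(cluster_detections(emb))  # expected: two clusters, e.g. [1 1 1 2 2 2]
```

The resulting clusters would then be candidates for the cross-modal linking step sketched earlier, which is where the reported HOTA grounding gains are measured.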
