Incentivizing Temporal-Awareness in Egocentric Video Understanding Models

arXiv cs.CV / March 31, 2026


Key Points

  • The paper argues that multimodal LLMs (MLLMs) struggle with temporal awareness in egocentric video tasks because common training objectives do not explicitly reward temporal reasoning and instead encourage frame-level spatial shortcuts.
  • It introduces Temporal Global Policy Optimization (TGPO), an RL with verifiable rewards method that calibrates reward signals by contrasting model outputs for temporally ordered versus shuffled video frames.
  • TGPO is designed to suppress spatial shortcut behaviors and supports cold-start RL training when combined with GRPO and GSPO.
  • Experiments on five egocentric video benchmarks show TGPO improves temporal grounding and causal coherence and outperforms prior RL-based approaches for video reasoning.
  • The authors position TGPO as a simple, scalable route to building more temporally robust MLLMs for egocentric video understanding.

Abstract

Multimodal large language models (MLLMs) have recently shown strong performance in visual understanding, yet they often lack temporal awareness, particularly in egocentric settings where reasoning depends on the correct ordering and evolution of events. This deficiency stems in part from training objectives that fail to explicitly reward temporal reasoning and instead rely on frame-level spatial shortcuts. To address this limitation, we propose Temporal Global Policy Optimization (TGPO), a reinforcement learning with verifiable rewards (RLVR) algorithm designed to incentivize temporal awareness in MLLMs. TGPO contrasts model outputs generated from temporally ordered versus shuffled video frames to derive calibrated, globally normalized reward signals that explicitly favor temporally coherent reasoning. Integrated with GRPO and GSPO, TGPO supports cold-start RL training and effectively suppresses spatial shortcut behaviors learned by existing MLLMs. Experiments across five egocentric video benchmarks demonstrate that TGPO consistently improves temporal grounding and causal coherence, outperforming prior RL-based video reasoning approaches. Our results suggest that TGPO offers a simple and scalable pathway toward temporally robust MLLMs for egocentric video understanding.
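The paper's exact formulation is not reproduced in this summary, but the core idea of contrasting ordered versus shuffled frames to calibrate rewards can be illustrated with a minimal sketch. Here, the function name `tgpo_calibrated_rewards` and the specific subtract-then-normalize form are assumptions for illustration, not the authors' published algorithm:

```python
import numpy as np

def tgpo_calibrated_rewards(rewards_ordered, rewards_shuffled, eps=1e-8):
    """Hypothetical sketch of a TGPO-style calibrated reward.

    For each rollout, a verifiable reward is computed twice: once on
    temporally ordered frames and once on shuffled frames. Answers that
    remain correct under shuffling likely rely on frame-level spatial
    shortcuts, so their contribution is suppressed; only the reward that
    ordering itself contributes survives. The contrast is then globally
    normalized across the batch into GRPO-style advantages.
    """
    r_ord = np.asarray(rewards_ordered, dtype=float)
    r_shuf = np.asarray(rewards_shuffled, dtype=float)
    # Temporal contrast: credit only what correct ordering adds.
    contrast = r_ord - r_shuf
    # Global normalization -> calibrated, zero-mean advantages.
    return (contrast - contrast.mean()) / (contrast.std() + eps)
```

In this toy view, a rollout rewarded under both orderings yields zero (or negative) advantage, while one rewarded only on ordered frames is amplified, which is the shortcut-suppression behavior the abstract describes.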
