Structured Causal Video Reasoning via Multi-Objective Alignment

arXiv cs.CL / 4/7/2026


Key Points

  • The paper argues that existing Video-LLMs often perform inefficient and fragile causal inference because they rely on unstructured text rather than a structured mental model of entities, actions, and temporal relations.
  • It proposes a compact structured prior called Structured Event Facts that captures salient events and explicit causal relationships before the main reasoning stage.
  • To train models on these structured facts, the authors introduce the CausalFact-60K dataset and a four-stage pipeline (facts alignment, format warm-start, thinking warm-start, and RL-based post-training).
  • During reinforcement learning, the work treats competing goals—structural completeness, causal fidelity, and reasoning length—as a Multi-Objective RL (MORL) problem and optimizes toward the Pareto frontier to manage trade-offs.
  • The resulting model, Factum-4B, is reported to produce more reliable reasoning and improved performance on video understanding benchmarks that require fine-grained temporal causal inference.
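To make the "structured mental model" concrete, here is a minimal sketch of what a Structured Event Facts record could look like. The paper does not publish an exact schema, so the class and field names below (`Event`, `entities`, `causal_links`, the sample events) are illustrative assumptions, not the authors' format.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    # Hypothetical fields for one salient event in the video.
    event_id: str
    entities: list      # entities involved, e.g. ["person", "cup"]
    action: str         # atomic action, e.g. "knocks over"
    time_span: tuple    # (start_sec, end_sec) within the video

@dataclass
class StructuredEventFacts:
    events: list = field(default_factory=list)
    causal_links: list = field(default_factory=list)  # (cause_id, effect_id) pairs

    def add_event(self, event: Event):
        self.events.append(event)

    def link(self, cause_id: str, effect_id: str):
        # Record an explicit causal edge between two events.
        self.causal_links.append((cause_id, effect_id))

    def effects_of(self, cause_id: str):
        # Look up which events this event is claimed to cause.
        return [e for c, e in self.causal_links if c == cause_id]

facts = StructuredEventFacts()
facts.add_event(Event("e1", ["person", "cup"], "knocks over", (3.0, 4.0)))
facts.add_event(Event("e2", ["cup", "floor"], "shatters on", (4.0, 5.0)))
facts.link("e1", "e2")
print(facts.effects_of("e1"))  # → ['e2']
```

The point of such a representation, per the paper's framing, is that each causal claim becomes an explicit, checkable edge rather than a sentence buried in a long free-text description.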

Abstract

Human understanding of video dynamics is typically grounded in a structured mental representation of entities, actions, and temporal relations, rather than relying solely on immediate deductive reasoning. In contrast, existing Video-LLMs largely depend on unstructured video reasoning, where critical visual evidence is embedded in verbose textual descriptions and temporal causality is often weakly modeled. This leads to inefficient processes and fragile causal inference. To bridge this cognitive gap, we propose constructing a compact representation of salient events and their causal relationships, which we name Structured Event Facts, prior to the reasoning stage. This structured prior serves as an explicit constraint to promote concise and causally grounded reasoning, while also making intermediate evidence easier to verify. To effectively train models on such structured facts, we introduce CausalFact-60K and a four-stage training pipeline comprising facts alignment, format warm-start, thinking warm-start, and reinforcement learning-based post-training. During the RL stage, we find that this framework introduces competing objectives, as structural completeness and causal fidelity must be balanced against reasoning length, making the reward difficult to optimize. We address this challenge by formulating the optimization as a Multi-Objective Reinforcement Learning (MORL) problem and explicitly optimizing toward the Pareto frontier to balance these trade-offs. As a result, we introduce Factum-4B, which yields more reliable reasoning and delivers stronger performance on challenging video understanding tasks requiring fine-grained temporal inference.
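The Pareto-frontier idea in the MORL formulation can be sketched in a few lines. The objective names and sample scores below are assumptions for illustration; the paper's actual reward functions and optimizer are not specified in this summary.

```python
# Illustrative sketch of the multi-objective trade-off: each candidate
# reasoning trace is scored on (structural_completeness, causal_fidelity,
# -reasoning_length). Length is negated so "higher is better" holds for
# every objective.

def dominates(a, b):
    """a Pareto-dominates b if it is no worse on every objective
    and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_frontier(candidates):
    """Keep only candidates that no other candidate dominates."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

candidates = [
    (0.9, 0.8, -120),  # complete and faithful, but long
    (0.7, 0.9, -60),   # shorter and more faithful, less complete
    (0.6, 0.6, -200),  # dominated: worse than the first on all three
]
print(pareto_frontier(candidates))  # → [(0.9, 0.8, -120), (0.7, 0.9, -60)]
```

Optimizing "toward the Pareto frontier" then means preferring traces like the first two, where no objective can be improved without sacrificing another, rather than collapsing the three rewards into a single fixed-weight sum.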