DualFact+: A Multimodal Fact Verification Framework for Procedural Video Understanding

arXiv cs.AI / April 29, 2026

📰 News · Models & Research

Key Points

  • The paper introduces DualFact, a dual-layer multimodal evaluation framework that distinguishes conceptual facts from context-grounded facts in procedural video captioning.
  • DualFact uses implicit argument augmentation (VIA) and contrastive fact sets to perform more complete and role-consistent factual verification.
  • It provides two verification modes: DualFact-T checks against textual evidence, while DualFact-V checks against video-grounded visual evidence.
  • Experiments on YouCook3-Fact and CraftBench-Fact find that state-of-the-art multimodal LLMs often generate fluent but factually incomplete captions with systematic omissions and role inconsistencies.
  • DualFact aligns better with human factuality judgments than standard metrics, especially for contextual facts, and shows that caption-only evaluation overestimates hallucinations compared to video-grounded verification.

Abstract

We introduce DualFact, a dual-layer, multimodal factuality evaluation framework for procedural video captioning. DualFact separates factual correctness into conceptual facts, capturing abstract semantic roles (e.g., Action, Ingredient, Tool, Location), and contextual facts, capturing their grounded predicate-argument realizations in video. To support complete and role-consistent evaluation, DualFact incorporates implicit argument augmentation (VIA) and contrastive fact sets. We instantiate DualFact in two modes: DualFact-T, which verifies facts against textual evidence, and DualFact-V, which verifies facts against video-grounded visual evidence. Experiments on YouCook3-Fact and CraftBench-Fact show that state-of-the-art multimodal language models produce fluent but often factually incomplete captions, with systematic omissions and role-level inconsistencies. DualFact correlates more strongly with human factuality judgments than standard metrics, particularly for contextual facts, and reveals that caption-only evaluation overestimates hallucinations compared to video-grounded verification. Overall, DualFact offers an interpretable and human-aligned evaluation protocol that highlights persistent challenges in multimodal factual grounding, extending beyond surface-level fluency.
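To make the dual-layer distinction concrete, the framework's two fact types could be modeled roughly as below. This is a hypothetical sketch, not the paper's implementation: the class and function names, the tuple-based argument encoding, and the matching logic are all illustrative assumptions. Conceptual facts are abstract role-concept pairs; contextual facts are grounded predicate-argument structures, and verifying one requires a role-consistent match against the evidence set (textual evidence for DualFact-T, video-grounded evidence for DualFact-V).

```python
from dataclasses import dataclass

# Illustrative sketch of a dual-layer fact representation.
# Names and logic are assumptions, not DualFact's actual code.

@dataclass(frozen=True)
class ConceptualFact:
    """Abstract semantic role mention, e.g. Action, Ingredient, Tool, Location."""
    role: str     # semantic role label, e.g. "Ingredient"
    concept: str  # abstract concept filling the role, e.g. "onion"

@dataclass(frozen=True)
class ContextualFact:
    """Grounded predicate-argument realization of one procedural step."""
    predicate: str    # action predicate, e.g. "chop"
    arguments: tuple  # (role, argument) pairs, e.g. (("Ingredient", "onion"),)

def conceptual_recall(caption_facts, evidence_facts):
    """Fraction of a caption's conceptual facts supported by the evidence set."""
    evidence = set(evidence_facts)
    supported = [f for f in caption_facts if f in evidence]
    return len(supported) / max(len(caption_facts), 1)

def contextual_match(fact, evidence_facts):
    """A contextual fact is verified only if some evidence fact shares its
    predicate and covers its arguments under the same roles."""
    return any(
        fact.predicate == ev.predicate and set(fact.arguments) <= set(ev.arguments)
        for ev in evidence_facts
    )
```

Under this sketch, a caption claiming "chop the garlic" over evidence showing "chop the onion" would pass at the conceptual level for the Action role but fail the contextual check, since the Ingredient argument does not match under the same role. This is the kind of role-level inconsistency the paper reports as systematic in current multimodal LLM captions.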