A Progressive Training Strategy for Vision-Language Models to Counteract Spatio-Temporal Hallucinations in Embodied Reasoning

arXiv cs.AI / 4/14/2026


Key Points

  • The paper addresses a key limitation of vision-language models (VLMs) in embodied spatiotemporal reasoning, focusing on “multi-image reasoning hallucinations” where forward vs. reverse temporal queries diverge sharply due to shortcut learning.
  • It introduces a new Chain-of-Thought (CoT) dataset that breaks complex spatiotemporal reasoning into step-by-step components with clear spatiotemporal judgments.
  • The authors propose a progressive training strategy: supervised pre-training on the CoT dataset to establish logical/spatiotemporal structure, followed by fine-tuning with weakly labeled data to improve generalization.
  • Experiments show improved backbone accuracy and a dramatic reduction in the forward-backward performance gap from over 70% to 6.53%, indicating more authentic dynamic reasoning and reduced temporal bias.
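The two-stage recipe in the points above can be sketched in miniature. The toy linear model, synthetic datasets, and hyperparameters below are illustrative stand-ins (not the paper's VLM, CoT dataset, or training configuration); the only idea carried over is the progression: supervised pre-training on structured data first, then lower-learning-rate fine-tuning on larger, noisier weakly-labeled data.

```python
# Minimal runnable sketch of a progressive (two-stage) training strategy.
# All models/data here are toy assumptions, not the authors' setup.
import numpy as np

def sgd_stage(w, X, y, lr, epochs):
    """One training stage: per-sample SGD on squared error; returns new weights."""
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            grad = 2.0 * (xi @ w - yi) * xi   # d/dw (xi.w - yi)^2
            w = w - lr * grad
    return w

def progressive_train(w, cot_data, weak_data):
    # Stage 1: supervised pre-training on clean, CoT-style data
    # to instill the target structure.
    w = sgd_stage(w, *cot_data, lr=0.05, epochs=30)
    # Stage 2: fine-tuning on larger weakly-labeled data at a lower
    # learning rate, refining rather than overwriting stage 1.
    w = sgd_stage(w, *weak_data, lr=0.005, epochs=30)
    return w

# Synthetic demo: both stages share one underlying target; the weak
# labels are noisy versions of it.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
X_cot = rng.normal(size=(50, 3))
y_cot = X_cot @ true_w                                   # clean labels
X_weak = rng.normal(size=(200, 3))
y_weak = X_weak @ true_w + rng.normal(scale=0.1, size=200)  # weak labels
w = progressive_train(np.zeros(3), (X_cot, y_cot), (X_weak, y_weak))
```

The lower learning rate in stage 2 is the standard way to keep noisy fine-tuning data from erasing what the supervised stage established.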

Abstract

Vision-Language Models (VLMs) have made significant strides in static image understanding but continue to face critical hurdles in spatiotemporal reasoning. A major bottleneck is "multi-image reasoning hallucination", where a massive performance drop between forward and reverse temporal queries reveals a dependence on superficial shortcuts instead of genuine causal understanding. To mitigate this, we first develop a new Chain-of-Thought (CoT) dataset that decomposes intricate reasoning into detailed spatiotemporal steps and definitive judgments. Building on this, we present a progressive training framework: it initiates with supervised pre-training on our CoT dataset to instill logical structures, followed by fine-tuning with scalable weakly-labeled data for broader generalization. Our experiments demonstrate that this approach not only improves backbone accuracy but also slashes the forward-backward performance gap from over 70% to only 6.53%. This confirms the method's ability to develop authentic dynamic reasoning and reduce the inherent temporal biases of current VLMs.
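The headline metric in the abstract is a simple quantity worth stating precisely. The sketch below defines it; the accuracy values in the usage lines are made-up placeholders chosen only to illustrate the shortcut-learning signature, not the paper's reported numbers.

```python
def forward_backward_gap(forward_acc: float, backward_acc: float) -> float:
    """Absolute accuracy gap, in percentage points, between forward and
    reverse temporal queries. A large gap suggests the model exploits
    temporal-order shortcuts rather than reasoning about the dynamics."""
    return abs(forward_acc - backward_acc) * 100.0

# Illustrative placeholders: strong forward accuracy but near-chance
# reverse accuracy yields a large gap, the hallmark of shortcut learning.
shortcut_gap = forward_backward_gap(0.85, 0.14)  # ~71 points
robust_gap = forward_backward_gap(0.80, 0.75)    # ~5 points
```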