CVT-Bench: Counterfactual Viewpoint Transformations Reveal Unstable Spatial Representations in Multimodal LLMs

arXiv cs.CV · March 24, 2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper introduces CVT-Bench, a synthetic benchmark that tests whether multimodal LLMs maintain stable relational/spatial representations when the camera viewpoint is changed hypothetically, via counterfactual orbit transformations applied without re-rendering any images.
  • Experiments across 100 scenes and 6,000 relational queries show that even state-of-the-art MLLMs can degrade noticeably under viewpoint changes, with frequent cycle-consistency violations and rapid decay in relational stability.
  • The study finds that representation choice matters: adding more structured inputs (e.g., textual bounding boxes and especially scene graphs) improves viewpoint stability compared with less structured visual inputs.
  • Results indicate that strong single-view spatial accuracy may overestimate robustness, because induced spatial representations can be unstable under counterfactual viewpoint reasoning.
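The core trick behind these counterfactual queries can be illustrated with a small geometric sketch (my own illustration, not the paper's code): an orbit of the camera by angle θ around the scene center is equivalent to rotating the objects' top-down positions by −θ in camera coordinates, so the ground-truth relation after a hypothetical viewpoint change can be derived without rendering a new image.

```python
import math

# Illustrative sketch (not CVT-Bench's actual code): a scene as top-down
# (x, z) object positions, camera looking at the origin. A counterfactual
# orbit by theta degrees is modeled as rotating positions by -theta in
# camera coordinates -- no re-rendering needed.

def orbit(pos, theta_deg):
    """Rotate a top-down (x, z) position about the origin by -theta."""
    t = math.radians(-theta_deg)
    x, z = pos
    return (x * math.cos(t) - z * math.sin(t),
            x * math.sin(t) + z * math.cos(t))

def left_of(a, b):
    """Is object a left of object b from the camera's viewpoint (x axis)?"""
    return a[0] < b[0]

scene = {"mug": (-1.0, 3.0), "book": (1.0, 3.0)}

# From the original viewpoint the mug is left of the book...
assert left_of(scene["mug"], scene["book"])

# ...but after a hypothetical 180-degree orbit the relation flips.
rotated = {k: orbit(v, 180.0) for k, v in scene.items()}
assert not left_of(rotated["mug"], rotated["book"])
```

A model with a stable spatial representation should track exactly these flips; the benchmark's finding is that MLLMs frequently fail to do so even though the underlying geometry is this simple.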

Abstract

Multimodal large language models (MLLMs) achieve strong performance on single-view spatial reasoning tasks, yet it remains unclear whether they maintain stable spatial state representations under counterfactual viewpoint changes. We introduce a controlled diagnostic benchmark that evaluates relational consistency under hypothetical camera orbit transformations without re-rendering images. Across 100 synthetic scenes and 6,000 relational queries, we measure viewpoint consistency, 360° cycle agreement, and relational stability over sequential transformations. Despite high single-view accuracy, state-of-the-art MLLMs exhibit systematic degradation under counterfactual viewpoint changes, with frequent violations of cycle consistency and rapid decay in relational stability. We further evaluate multiple input representations (visual input, textual bounding boxes, and structured scene graphs) and show that increasing representational structure improves stability. Our results suggest that single-view spatial accuracy overestimates the robustness of induced spatial representations and that representation structure plays a critical role in counterfactual spatial reasoning.
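The 360° cycle-agreement metric described above can be sketched as follows. This is a hedged reconstruction from the abstract alone: the function names and the scoring rule (fraction of queries answered identically before and after a full cycle of orbit steps) are my assumptions, not the paper's published definition.

```python
# Hypothetical sketch of a 360-degree cycle-agreement score. After a
# complete sequence of counterfactual orbits summing to 360 degrees, a
# model with a stable spatial representation should return to its
# original relational answers. The answer lists below stand in for
# responses from an actual MLLM.

def cycle_agreement(answers_at_0, answers_after_360):
    """Fraction of relational queries answered identically before and
    after a full 360-degree cycle of counterfactual orbit steps."""
    if len(answers_at_0) != len(answers_after_360):
        raise ValueError("answer lists must be aligned per query")
    matches = sum(a == b for a, b in zip(answers_at_0, answers_after_360))
    return matches / len(answers_at_0)

# A perfectly cycle-consistent model scores 1.0:
print(cycle_agreement(["left", "behind"], ["left", "behind"]))   # 1.0
# One flipped answer out of two scores 0.5:
print(cycle_agreement(["left", "behind"], ["right", "behind"]))  # 0.5
```

Under this reading, the paper's reported "frequent violations of cycle consistency" correspond to scores well below 1.0 even though a full-cycle orbit is an identity transformation of the scene.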