TableVision: A Large-Scale Benchmark for Spatially Grounded Reasoning over Complex Hierarchical Tables

arXiv cs.AI / 4/7/2026


Key Points

  • The paper identifies a “Perception Bottleneck” in multimodal large language models (MLLMs) performing spatially grounded reasoning over complex hierarchical tables: as task complexity scales, the number of discrete visual regions a model must track grows disproportionately.
  • It introduces TableVision, a large-scale, trajectory-aware benchmark that provides pixel-perfect spatial grounding for multi-step logical deductions in hierarchical table layouts.
  • TableVision categorizes tasks into three cognitive levels (Perception, Reasoning, Analysis) across 13 sub-categories and includes 6,799 high-fidelity reasoning trajectories.
  • Experiments with diagnostic probing show that adding explicit spatial constraints improves spatial attention and restores reasoning performance for MLLMs.
  • A two-stage decoupled framework yields a reported 12.3% overall accuracy improvement on the test set, positioning TableVision as a testbed for perception–logic synergy in document understanding.
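To make the dataset design concrete, here is a minimal sketch of what a trajectory-aware record with pixel-level grounding might look like. The class and field names (`GroundedStep`, `Trajectory`, `region_count`) are illustrative assumptions, not TableVision's actual schema.

```python
from dataclasses import dataclass

@dataclass
class GroundedStep:
    """One reasoning step tied to pixel-space regions in the table image."""
    description: str                          # natural-language deduction
    regions: list[tuple[int, int, int, int]]  # (x, y, w, h) boxes the step relies on

@dataclass
class Trajectory:
    level: str        # "Perception" | "Reasoning" | "Analysis"
    subcategory: str  # one of the 13 sub-categories
    question: str
    steps: list[GroundedStep]
    answer: str

    def region_count(self) -> int:
        """Total discrete regions touched across all steps -- the quantity
        the paper links to the 'Perception Bottleneck'."""
        return sum(len(s.regions) for s in self.steps)

# Toy example: a cross-row comparison that touches 5 regions in 2 steps.
traj = Trajectory(
    level="Reasoning",
    subcategory="cross-row aggregation",
    question="Which quarter had the highest net revenue?",
    steps=[
        GroundedStep("Locate the 'Net revenue' row header.",
                     [(12, 240, 160, 28)]),
        GroundedStep("Compare values across the four quarter columns.",
                     [(180, 240, 90, 28), (280, 240, 90, 28),
                      (380, 240, 90, 28), (480, 240, 90, 28)]),
    ],
    answer="Q3",
)
print(traj.region_count())  # 5
```

Coupling every deduction to explicit boxes is what lets a benchmark measure whether failures are perceptual (wrong regions attended) rather than logical.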

Abstract

Structured tables are essential for conveying high-density information in professional domains such as finance, healthcare, and scientific research. Despite the progress in Multimodal Large Language Models (MLLMs), reasoning performance remains limited for complex tables with hierarchical layouts. In this paper, we identify a critical Perception Bottleneck through quantitative analysis. We find that as task complexity scales, the number of involved discrete visual regions increases disproportionately. This processing density leads to an internal "Perceptual Overload," where MLLMs struggle to maintain accurate spatial attention during implicit generation. To address this bottleneck, we introduce TableVision, a large-scale, trajectory-aware benchmark designed for spatially grounded reasoning. TableVision stratifies tabular tasks into three cognitive levels (Perception, Reasoning, and Analysis) across 13 sub-categories. By utilizing a rendering-based deterministic grounding pipeline, the dataset explicitly couples multi-step logical deductions with pixel-perfect spatial ground truths, comprising 6,799 high-fidelity reasoning trajectories. Our empirical results, supported by diagnostic probing, demonstrate that explicit spatial constraints significantly recover the reasoning potential of MLLMs. Furthermore, our two-stage decoupled framework achieves a robust 12.3% overall accuracy improvement on the test set. TableVision provides a rigorous testbed and a fresh perspective on the synergy between perception and logic in document understanding.
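The abstract's two-stage decoupled framework can be sketched as follows. The stage functions below are toy stand-ins for the authors' models, assumed here purely to illustrate the split: a perception stage grounds cells to pixel boxes and reads their text, and a reasoning stage then operates on extracted text alone, freed from pixel-space attention.

```python
def perception_stage(table_image: str) -> list[dict]:
    """Stage 1 (hypothetical): ground each relevant cell to a pixel box
    and read its text. Faked here with a fixed lookup keyed by image id."""
    fake_ocr = {
        "demo.png": [
            {"box": (10, 10, 80, 24), "text": "Q1"},
            {"box": (10, 40, 80, 24), "text": "1.2"},
            {"box": (100, 10, 80, 24), "text": "Q2"},
            {"box": (100, 40, 80, 24), "text": "1.7"},
        ]
    }
    return fake_ocr[table_image]

def reasoning_stage(cells: list[dict], question: str) -> str:
    """Stage 2 (hypothetical): reason over the extracted text only.
    Decoupling logic from perception is the mechanism the paper credits
    for its reported accuracy gain."""
    headers = [c["text"] for c in cells if c["box"][0] == 10 or c["box"][1] == 10]
    headers = [c["text"] for c in cells if c["box"][1] == 10]   # top row = headers
    values = [float(c["text"]) for c in cells if c["box"][1] == 40]  # value row
    best = max(range(len(values)), key=values.__getitem__)
    return headers[best]

cells = perception_stage("demo.png")
print(reasoning_stage(cells, "Which quarter is higher?"))  # Q2
```

The point of the decoupling is diagnostic as much as architectural: if stage 2 succeeds whenever stage 1's boxes are correct, the failure mode is perceptual overload rather than weak logic, which is exactly what the paper's probing experiments argue.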