WildTableBench: Benchmarking Multimodal Foundation Models on Table Understanding In the Wild

arXiv cs.CV / 5/5/2026


Key Points

  • The paper introduces WildTableBench, a new question-answering benchmark specifically for “in-the-wild” table images from real-world sources rather than clean or strictly structured inputs.
  • WildTableBench includes 402 high-information-density table images and 928 manually annotated, verified questions across 17 subtypes and five categories.
  • The benchmark evaluates 21 multimodal foundation models (both proprietary and open-source), finding that only one model achieves over 50% accuracy while all remaining models range from 4.1% to 49.9%.
  • Diagnostic analyses indicate that many failures stem from persistent weaknesses in structural perception and numerical reasoning when tables have varied layouts and domain-specific complexity.
  • Overall, the study positions WildTableBench as a valuable diagnostic tool to better understand current multimodal model capabilities for table understanding.

Abstract

Using multimodal foundation models to analyze table images is a high-value yet challenging application in consumer and enterprise scenarios. Despite its importance, current evaluations rely largely on structured-text tables or clean rendered images, leaving the visual complexity of in-the-wild table images underexplored. Such images feature varied layouts and diverse domains that demand sophisticated structural perception and numerical reasoning. To bridge this gap, we introduce WildTableBench, the first question-answering benchmark for naturally occurring table images from real-world settings. WildTableBench comprises 402 high-information-density table images collected from online forums and websites across diverse domains, together with 928 manually annotated and verified questions spanning 17 subtypes across five categories. We evaluate 21 frontier proprietary and open-source multimodal foundation models on this benchmark. Only one model exceeds 50% accuracy, while all remaining models range from 4.1% to 49.9%. We further conduct diagnostic analyses to characterize model failures and reveal persistent weaknesses in structural perception and reasoning. These results and analyses provide useful insights into current model capabilities and establish WildTableBench as a valuable diagnostic benchmark for table image understanding.