Test-Time Matching: Unlocking Compositional Reasoning in Multimodal Models

arXiv cs.AI / 4/27/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that current benchmark metrics for multimodal models can systematically underestimate performance on compositional reasoning tasks, sometimes leaving models at or below random-chance levels.
  • It introduces a group matching score to better reflect true capability, and shows that achieving correctness under this new metric can be converted to correctness under existing metrics via a simple overfitting step.
  • Using this insight, the authors propose Test-Time Matching (TTM), an iterative self-improving algorithm that boosts multimodal model performance without any external supervision.
  • Experiments report new best results, including SigLIP-B16 surpassing all previously reported results and GPT-4.1 becoming the first model to exceed estimated human performance on Winoground, with further gains on MMVP-VLM and on generative multimodal models.
  • TTM is reported to provide consistent improvements across 16 dataset variants, with relative gains up to 85.7% on challenging benchmarks like WhatsUp, even when metric artifacts or group structures are absent.
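To make the group matching idea concrete: on a Winoground-style group (n captions, n images, with caption i belonging to image i), the paper's score asks whether the model's best one-to-one assignment recovers the ground truth, rather than requiring every pairwise comparison to hold individually. The sketch below illustrates this under that reading; the function name and the brute-force assignment search are illustrative, not the paper's implementation.

```python
import numpy as np
from itertools import permutations

def group_matching_score(sim: np.ndarray) -> bool:
    """Return True if the best one-to-one matching of captions (rows)
    to images (columns) equals the ground-truth identity matching.

    sim[i, j] is the model's similarity between caption i and image j;
    ground truth pairs caption i with image i.
    """
    n = sim.shape[0]
    best = max(permutations(range(n)),
               key=lambda p: sum(sim[i, p[i]] for i in range(n)))
    return list(best) == list(range(n))

# A 2x2 group where the classic per-pair metric would mark a failure
# (caption 0 scores higher with the wrong image, 0.64 > 0.62), yet the
# best overall matching is still the correct one: 0.62 + 0.60 > 0.64 + 0.50.
sim = np.array([[0.62, 0.64],
                [0.50, 0.60]])
print(group_matching_score(sim))  # True
```

This is exactly the artifact the paper highlights: per-pair metrics can report chance-level accuracy on groups the model actually disambiguates as a whole.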

Abstract

Frontier AI models have achieved remarkable progress, yet recent studies suggest they struggle with compositional reasoning, often performing at or below random chance on established benchmarks. We revisit this problem and show that widely used evaluation metrics systematically underestimate model capability. To correct this artifact, we introduce a group matching score that more faithfully evaluates model capability. Moreover, correctness under the new metric can be translated into correctness under existing metrics via a simple overfitting step. This adjustment enables SigLIP-B16 to surpass all previous results and GPT-4.1 to yield the first result surpassing estimated human performance on Winoground. Building on this insight, we propose Test-Time Matching (TTM), an iterative, self-improving algorithm that further bootstraps model performance without any external supervision. TTM delivers additional, non-trivial improvements: for example, TTM enables SigLIP-B16 to surpass GPT-4.1 on MMVP-VLM, establishing a new state of the art. TTM also extends beyond contrastive vision-language models, yielding clear gains on a generative multimodal model across benchmarks. Importantly, TTM remains broadly effective even on benchmarks without metric-induced effects or group structures, achieving relative gains up to 85.7% on challenging datasets such as WhatsUp. Across 16 dataset variants spanning diverse setups, our experiments demonstrate that TTM consistently improves model performance and advances the frontier of compositional reasoning.
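In spirit, the TTM loop alternates between matching and self-training: compute the model's best caption-to-image matchings at test time, treat those matchings as pseudo-labels, and update the model to reinforce them, with no external supervision. The toy below sketches that loop under stated assumptions; the linear scoring model (sim = C @ W @ I.T), the outer-product update, and the round count are all illustrative choices, not the paper's training procedure.

```python
import numpy as np
from itertools import permutations

def match(sim: np.ndarray) -> tuple:
    """Best one-to-one assignment of captions (rows) to images (cols)."""
    n = sim.shape[0]
    return max(permutations(range(n)),
               key=lambda p: sum(sim[i, p[i]] for i in range(n)))

def ttm(W: np.ndarray, groups: list, rounds: int = 3, lr: float = 0.1) -> np.ndarray:
    """Toy Test-Time-Matching-style loop (illustrative, not the paper's recipe):
    (1) score each group under the current model, (2) derive pseudo-labels
    from the best matching, (3) nudge the model to raise the similarity of
    the matched pairs, and repeat."""
    for _ in range(rounds):
        for C, I in groups:                      # caption / image embeddings
            sim = C @ W @ I.T
            pseudo = match(sim)                  # pseudo-labels from matching
            for i, j in enumerate(pseudo):
                W = W + lr * np.outer(C[i], I[j])  # reinforce matched pair
    return W

# Tiny 2x2 group: caption i belongs to image i.
C = np.array([[1.0, 0.0], [0.0, 1.0]])
I = np.array([[1.0, 0.1], [0.1, 1.0]])
W = ttm(np.zeros((2, 2)), [(C, I)], rounds=2)
print(match(C @ W @ I.T))  # (0, 1): the self-reinforced matching is the correct one
```

The key property the sketch preserves is that the supervision signal comes entirely from the model's own matchings, which is what lets the procedure bootstrap performance at test time.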