Active Reasoning Vision-Language Models via Sequential Experimental Design

arXiv cs.CV / 5/5/2026

📰 News · Models & Research

Key Points

  • The paper argues that vision-language models face a “perceptual bandwidth bottleneck”: a broad field of view sacrifices the fine-grained detail needed for complex reasoning.
  • It reformulates overcoming this limitation as a sequential decision-making problem using sequential Bayesian optimal experimental design (S-BOED), balancing spatial coverage with resolution.
  • Because exact Bayesian inference is intractable for continuous gigapixel image spaces, the authors derive tractable approximations to make the approach practical.
  • They propose a training-free inference strategy that instantiates the S-BOED objective for agents equipped with multiple vision tools, supporting optimisation algorithms ranging from greedy sampling to look-ahead planning.
  • Experiments on gigapixel-level benchmarks show gains over state-of-the-art baselines and narrow the gap to human-annotated oracles.

Abstract

Visual perception in modern Vision-Language Models (VLMs) is constrained by a fundamental perceptual bandwidth bottleneck: a broad field of view inevitably sacrifices the fine-grained details necessary for complex reasoning. Inspired by the classical paradigms of active vision and information foraging, we frame overcoming this limitation as a sequential decision-making process. We formalise this process through the lens of the sequential Bayesian optimal experimental design (S-BOED) problem. While exact Bayesian inference is intractable in continuous gigapixel spaces, we derive principled yet tractable approximations that balance spatial coverage against resolution. To validate this framework, we present a training-free inference strategy as a practical instantiation of the S-BOED objective for agents equipped with multiple vision tools. Designed as a flexible template, this strategy accommodates arbitrary optimisation algorithms, ranging from efficient greedy sampling to look-ahead planning, to approximate the optimal design. Empirical evaluations on gigapixel-level benchmarks demonstrate that our approach further boosts the performance of state-of-the-art models, significantly outperforming standard baselines and effectively narrowing the gap towards human-annotated oracles.
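The greedy instantiation of the S-BOED objective can be pictured as repeatedly choosing the next view (e.g. a high-resolution crop versus a wide shot) that maximises expected information gain about the answer. The toy sketch below illustrates a single greedy step; it is not the paper's implementation, and the hypothesis set, the binary observation model, and the view names (`wide_view`, `zoom_crop`) are all hypothetical simplifications.

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

def expected_info_gain(prior, lik):
    """Expected drop in posterior entropy from one binary observation.

    prior : P(hypothesis) over candidate answers
    lik   : P(obs = 1 | hypothesis) under the chosen view
    """
    p1 = sum(pr * l for pr, l in zip(prior, lik))
    exp_post_entropy = 0.0
    for p_obs, obs_lik in ((p1, lik), (1 - p1, [1 - l for l in lik])):
        if p_obs > 0:
            # Bayes update conditioned on this observation outcome
            post = [pr * l / p_obs for pr, l in zip(prior, obs_lik)]
            exp_post_entropy += p_obs * entropy(post)
    return entropy(prior) - exp_post_entropy

# Hypothetical setup: two candidate answers, two possible vision-tool calls.
prior = [0.5, 0.5]
views = {
    "wide_view": [0.5, 0.5],  # coarse view: observation is uninformative
    "zoom_crop": [0.9, 0.1],  # high-res crop: strongly discriminates answers
}

# Greedy S-BOED step: pick the view expected to shrink uncertainty the most.
best_view = max(views, key=lambda v: expected_info_gain(prior, views[v]))
```

A look-ahead planner, which the paper's flexible template also accommodates, would replace the single `max` with a search over sequences of views.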