Entropy-Gradient Grounding: Training-Free Evidence Retrieval in Vision-Language Models

arXiv cs.CL / 4/10/2026


Key Points

  • The paper proposes “training-free evidence retrieval” for vision-language models by treating grounding as an iterative, test-time process of finding where to look next for ambiguous queries.
  • It introduces an entropy-gradient relevance map computed by backpropagating entropy of the model’s next-token distribution to visual token embeddings, avoiding auxiliary detectors or attention-map heuristics.
  • For multi-evidence (compositional) questions, the method extracts and ranks multiple coherent visual regions to assemble supporting evidence across different areas of an input.
  • An iterative zoom-and-reground strategy with a spatial-entropy stopping rule helps prevent over-refinement while improving localization quality.
  • Experiments on seven benchmarks across four VLM architectures show consistent gains over prior approaches, especially in detail-critical and high-resolution settings, and yield more interpretable evidence localizations.
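The core mechanism in the second point can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's implementation: a random attention-pooled "visual context" and a linear language head stand in for a real VLM, and all names (`visual_embeds`, `lm_head`, `query`) are assumptions for the sketch. It shows the shape of the idea: compute the entropy of the next-token distribution, backpropagate it to the visual token embeddings, and rank tokens by gradient magnitude.

```python
import torch
import torch.nn.functional as F

# Toy stand-in for a VLM (illustrative assumption, not the paper's code):
# 16 visual tokens, attention-pooled into a context, fed to a linear LM head.
torch.manual_seed(0)
num_visual_tokens, dim, vocab = 16, 32, 100

visual_embeds = torch.randn(num_visual_tokens, dim, requires_grad=True)
lm_head = torch.nn.Linear(dim, vocab)

# Attention-style pooling so each visual token contributes differently.
query = torch.randn(dim)
weights = F.softmax(visual_embeds @ query, dim=0)
context = weights @ visual_embeds

# Entropy of the model's next-token distribution: H = -sum p * log p.
log_probs = F.log_softmax(lm_head(context), dim=-1)
entropy = -(log_probs.exp() * log_probs).sum()

# Backpropagate the entropy to the visual token embeddings.
entropy.backward()

# Entropy-gradient relevance per visual token: L2 norm of its gradient.
relevance = visual_embeds.grad.norm(dim=-1)  # shape: (num_visual_tokens,)
top_regions = relevance.argsort(descending=True)[:3]
print(relevance.shape, top_regions.tolist())
```

In a real VLM the same pattern would run one forward pass for the query, take the entropy at the answer position, and read gradients off the visual token embeddings; no auxiliary detector or attention-map heuristic is needed, which is the paper's stated advantage.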

Abstract

Despite rapid progress, pretrained vision-language models still struggle when answers depend on tiny visual details or on combining clues spread across multiple regions, as in documents and compositional queries. We address this by framing grounding as test-time evidence retrieval: given a query, the model should actively identify where to look next to resolve ambiguity. To this end, we propose a training-free, model-intrinsic grounding method that uses uncertainty as supervision. Specifically, we compute the entropy of the model's next-token distribution and backpropagate it to the visual token embeddings to obtain an entropy-gradient relevance map, without auxiliary detectors or attention-map heuristics. We then extract and rank multiple coherent regions to support multi-evidence queries, and introduce an iterative zoom-and-reground procedure with a spatial-entropy stopping rule to avoid over-refinement. Experiments on seven benchmarks across four VLM architectures demonstrate consistent improvements over existing methods, with the largest gains on detail-critical and high-resolution settings, while also producing more interpretable evidence localizations.
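The iterative zoom-and-reground loop with a spatial-entropy stopping rule can also be sketched. Everything here is a hedged toy: `relevance_fn` stands in for the entropy-gradient map above (any callable returning a non-negative 2D array), the half-size crop policy and the threshold value are illustrative assumptions, and only the control flow mirrors the described procedure: re-ground, test whether the relevance mass is already concentrated, and zoom toward the peak otherwise.

```python
import numpy as np

def spatial_entropy(relevance):
    # Normalize the 2D relevance map into a distribution and take its
    # Shannon entropy; low entropy means mass is spatially concentrated.
    p = relevance / relevance.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def zoom_and_reground(image, relevance_fn, max_steps=4, entropy_thresh=1.0):
    """Iteratively crop toward the most relevant region (illustrative sketch).

    Stops when the relevance map's spatial entropy falls below
    `entropy_thresh` (evidence localized) or after `max_steps`, which is
    what guards against over-refinement.
    """
    crop = image
    for _ in range(max_steps):
        rel = relevance_fn(crop)
        if spatial_entropy(rel) < entropy_thresh:
            break  # evidence already localized; stop refining
        # Zoom: take the half-size window centered on the relevance peak.
        r, c = np.unravel_index(rel.argmax(), rel.shape)
        h, w = crop.shape[:2]
        hh, hw = max(h // 4, 1), max(w // 4, 1)
        r0 = int(np.clip(r - hh, 0, h - 2 * hh))
        c0 = int(np.clip(c - hw, 0, w - 2 * hw))
        crop = crop[r0:r0 + 2 * hh, c0:c0 + 2 * hw]
    return crop

# Toy demo: a synthetic image with one bright patch; relevance = intensity.
img = np.zeros((64, 64))
img[40:44, 10:14] = 1.0
result = zoom_and_reground(img, lambda x: x + 1e-6)
print(result.shape)
```

The design point the sketch captures is that the stopping rule is computed from the relevance map itself, so the loop needs no extra supervision to decide when zooming further would stop adding evidence.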