Knowing When Not to Answer: Evaluating Abstention in Multimodal Reasoning Systems

arXiv cs.CL · April 17, 2026


Key Points

  • The paper argues that effective abstention—detecting when evidence is insufficient and choosing not to answer—is essential for reliable multimodal reasoning systems but is largely missing from current vision-language and multi-agent evaluations.
  • It introduces MM-AQA, a new benchmark that generates unanswerable instances from answerable ones by varying visual dependency and evidence sufficiency to better reflect realistic failure modes.
  • Experiments on 2,079 samples across three frontier VLMs and two multi-agent system architectures show that models rarely abstain under standard prompting, and that even simple confidence-based baselines outperform prompting alone.
  • Multi-agent systems increase abstention, but they also create an accuracy–abstention trade-off, and results suggest that miscalibration—not reasoning depth—is the key bottleneck.
  • The study concludes that models abstain appropriately when key image or text evidence is entirely missing, yet still attempt to reconcile conflicting or degraded evidence, implying that abstention-aware training, rather than better prompting or more agents, is needed.
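
The "confidence-based baseline" mentioned above is a generic technique in which a model answers only when its self-reported confidence clears a threshold. A minimal sketch of that idea is below; the function name, threshold value, and abstention token are illustrative assumptions, not details from the paper:

```python
# Illustrative confidence-threshold abstention baseline.
# The threshold (0.7) and the "ABSTAIN" token are hypothetical choices;
# the paper's exact baseline may differ.

def abstain_or_answer(answer: str, confidence: float, threshold: float = 0.7) -> str:
    """Return the model's answer only if its confidence clears the
    threshold; otherwise abstain rather than guess."""
    if confidence < threshold:
        return "ABSTAIN"
    return answer

print(abstain_or_answer("Paris", 0.92))  # confident -> "Paris"
print(abstain_or_answer("Paris", 0.35))  # low confidence -> "ABSTAIN"
```

Even this simple rule gives the model an explicit "don't answer" action, which, per the paper's first finding, already outperforms standard prompting that implicitly assumes every question is answerable.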

Abstract

Effective abstention, recognizing evidence insufficiency and refraining from answering, is critical for reliable multimodal systems. Yet existing evaluation paradigms for vision-language models (VLMs) and multi-agent systems (MAS) assume answerability, pushing models to always respond. Abstention has been studied in text-only settings but remains underexplored in multimodal ones; current benchmarks either ignore unanswerability or rely on coarse methods that miss realistic failure modes. We introduce MM-AQA, a benchmark that constructs unanswerable instances from answerable ones via transformations along two axes: visual modality dependency and evidence sufficiency. Evaluating three frontier VLMs, spanning closed- and open-source models, and two MAS architectures across 2,079 samples, we find: (1) under standard prompting, VLMs rarely abstain, and even simple confidence baselines outperform this setup; (2) MAS improves abstention but introduces an accuracy–abstention trade-off; (3) sequential designs match or exceed iterative variants, suggesting the bottleneck is miscalibration rather than reasoning depth; and (4) models abstain when image or text evidence is absent, but attempt reconciliation when evidence is degraded or contradictory. Effective multimodal abstention requires abstention-aware training rather than better prompting or more agents.