AI Navigate

Linking Perception, Confidence and Accuracy in MLLMs

arXiv cs.CL / 3/13/2026

💬 Opinion · Models & Research

Key Points

  • The study identifies a severe confidence miscalibration problem in multi-modal LLMs, showing that improved perception does not guarantee reliable confidence estimates.
  • It proposes Confidence-Driven Reinforcement Learning (CDRL), which uses original-noise image pairs and a confidence-based reward to enhance perceptual sensitivity and calibrate model confidence.
  • It further introduces Confidence-Aware Test-Time Scaling (CA-TTS), which dynamically coordinates Self-Consistency, Self-Reflection, and Visual Self-Check modules guided by confidence signals.
  • An Expert Model takes on multiple roles (Planner, Critic, Voter) to schedule these modules and provide external verification, enabling robust confidence management.
  • The integrated framework achieves state-of-the-art results with consistent 8.8% gains across four benchmarks, supported by ablation studies and favorable scaling behavior.

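The summary does not specify the exact form of CDRL's confidence-based reward, but its stated goal is to reward correct answers while penalizing confidence that diverges from the actual outcome. A minimal sketch of one plausible reward shape (the function name, signature, and the specific accuracy-minus-calibration-gap formula are assumptions for illustration, not the paper's definition):

```python
def confidence_reward(correct: bool, confidence: float) -> float:
    """Hypothetical confidence-based reward for RL fine-tuning.

    Rewards correctness and penalizes miscalibration: the penalty is
    the gap between the model's stated confidence and the actual
    outcome (1.0 if correct, 0.0 if not).
    """
    assert 0.0 <= confidence <= 1.0, "confidence must be a probability"
    accuracy = 1.0 if correct else 0.0
    calibration_penalty = abs(confidence - accuracy)
    return accuracy - calibration_penalty
```

Under this shape, a confidently correct answer earns the maximum reward, a confidently wrong answer is penalized hardest, and hedged answers land in between, which pushes the policy toward reporting confidence that tracks its true accuracy.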
Abstract

Recent advances in Multi-modal Large Language Models (MLLMs) have predominantly focused on enhancing visual perception to improve accuracy. However, a critical question remains unexplored: Do models know when they do not know? Through a probing experiment, we reveal a severe confidence miscalibration problem in MLLMs. To address this, we propose Confidence-Driven Reinforcement Learning (CDRL), which uses original-noise image pairs and a novel confidence-based reward to enhance perceptual sensitivity and robustly calibrate the model's confidence. Beyond training benefits, calibrated confidence enables more effective test-time scaling as a free lunch. We further propose Confidence-Aware Test-Time Scaling (CA-TTS), which dynamically coordinates Self-Consistency, Self-Reflection, and Visual Self-Check modules guided by confidence signals. An Expert Model acts in multiple roles (e.g., Planner, Critic, Voter) to schedule these modules and provide external verification. Our integrated framework establishes new state-of-the-art results with consistent 8.8% gains across four benchmarks. Extensive ablation studies demonstrate the effectiveness of each module and the superiority of confidence-aware scaling.
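The abstract describes CA-TTS as dispatching test-time-scaling modules based on confidence signals, but not the dispatch rule itself. A minimal sketch of one plausible scheduler (the thresholds, module names as strings, and the banded routing policy are assumptions for illustration; the paper's Expert Model presumably makes richer, learned decisions):

```python
def schedule_modules(confidence: float,
                     hi: float = 0.85,
                     lo: float = 0.5) -> list[str]:
    """Hypothetical confidence-banded scheduler for test-time scaling.

    High confidence: accept the answer directly (spend no extra compute).
    Medium confidence: run self-consistency voting.
    Low confidence: escalate to the full pipeline, including reflection
    and a visual re-check of the input image.
    """
    if confidence >= hi:
        return []  # accept answer as-is
    if confidence >= lo:
        return ["self_consistency"]
    return ["self_consistency", "self_reflection", "visual_self_check"]
```

The design intuition matches the abstract's "free lunch" claim: once confidence is well calibrated, extra inference compute can be concentrated on exactly the queries where the model is likely to be wrong.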