Linking Perception, Confidence and Accuracy in MLLMs
arXiv cs.CL / 3/13/2026
💬 Opinion · Models & Research
Key Points
- The study identifies a severe confidence miscalibration problem in multi-modal LLMs, showing that improved perception does not guarantee reliable confidence estimates.
- It proposes Confidence-Driven Reinforcement Learning (CDRL), which uses original-noise image pairs and a confidence-based reward to enhance perceptual sensitivity and calibrate model confidence.
- It further introduces Confidence-Aware Test-Time Scaling (CA-TTS), which dynamically coordinates Self-Consistency, Self-Reflection, and Visual Self-Check modules guided by confidence signals.
- An Expert Model takes on multiple roles (Planner, Critic, Voter) to schedule these modules and provide external verification, enabling robust confidence management.
- The integrated framework achieves state-of-the-art results, with consistent gains of 8.8% across four benchmarks, validated by ablation studies and favorable scaling behavior.
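The key points above describe CDRL only at a high level, and the paper's exact reward formula is not reproduced here. As an illustration under assumed definitions, a confidence-based reward over an original/noised image pair might combine a calibration term (confidence should track correctness on the original image) with a perceptual-sensitivity term (confidence should drop on the corrupted copy). The function name and both terms below are hypothetical, not the paper's formulation:

```python
def cdrl_reward(conf_orig: float, conf_noise: float, correct: bool) -> float:
    """Hypothetical confidence-driven reward for an original/noised image pair.

    Assumed shape (not the paper's exact formula):
    - calibration: Brier-style term rewarding confidence that matches
      correctness on the original image;
    - sensitivity: term rewarding a confidence drop on the noised image,
      i.e. the perceptual sensitivity the key points mention.
    """
    # Calibration: 1 minus squared gap between confidence and the 0/1 label.
    calibration = 1.0 - (conf_orig - (1.0 if correct else 0.0)) ** 2
    # Sensitivity: positive only when confidence falls under noise.
    sensitivity = max(0.0, conf_orig - conf_noise)
    return calibration + sensitivity


# A well-calibrated, perceptually sensitive model: correct and confident on
# the original image, unsure on the noised copy.
print(cdrl_reward(conf_orig=0.9, conf_noise=0.3, correct=True))  # → 1.59
```

Under this sketch, a model whose confidence stays high on the noised image earns a strictly lower reward, which is the miscalibration pattern CDRL is said to penalize.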