MIRROR: A Hierarchical Benchmark for Metacognitive Calibration in Large Language Models
arXiv cs.LG / 4/23/2026
Key Points
- The paper introduces MIRROR, a benchmark with eight experiments across four metacognitive levels to test whether large language models (LLMs) can use self-knowledge to improve decision-making.
- Across roughly 250,000 evaluation instances covering 16 models from 8 labs, the authors find a consistent failure of compositional self-prediction on multi-domain tasks, with Compositional Calibration Error varying widely across models.
- While models show above-chance but imperfect domain-specific self-knowledge, they still systematically fail to convert that partial awareness into correct agentic action selection.
- External metacognitive control markedly reduces confident failures (from 0.600 to 0.143), whereas giving models their own calibration scores yields no statistically significant improvement (p > 0.05), suggesting the limitation is architectural and that external scaffolding, rather than self-reported calibration, is what helps (an illustrative metric sketch follows this list).
- The authors plan to publicly release the code, data, and Croissant metadata for the benchmark.
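The summary above does not reproduce the paper's exact definitions of Compositional Calibration Error or the confident-failure rate. As a rough illustration of the kind of quantities involved, the sketch below computes a generic ECE-style calibration gap and a high-confidence failure rate from per-item self-reported confidences and correctness labels; the function names, the binning scheme, and the 0.8 confidence threshold are assumptions for illustration, not the benchmark's actual metric definitions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE-style gap: per-bin |mean stated confidence - empirical accuracy|,
    weighted by the fraction of items in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    # Assign each item to a confidence bin over [0, 1].
    bin_ids = np.minimum((confidences * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece

def confident_failure_rate(confidences, correct, threshold=0.8):
    """Share of high-confidence answers that are wrong
    (threshold is an illustrative choice, not from the paper)."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    confident = confidences >= threshold
    if not confident.any():
        return 0.0
    return float((confident & ~correct).sum() / confident.sum())

# Toy usage: stated confidences vs. whether each answer was actually correct.
conf = [0.9, 0.85, 0.6, 0.95, 0.4]
ok = [True, False, True, False, False]
print(expected_calibration_error(conf, ok), confident_failure_rate(conf, ok))
```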