MEDLEY-BENCH: Scale Buys Evaluation but Not Control in AI Metacognition

arXiv cs.AI / 4/20/2026


Key Points

  • The paper introduces MEDLEY-BENCH, a benchmark for behavioural metacognition that explicitly disentangles independent reasoning, private self-revision, and socially influenced revision under genuine disagreements between models (a minimal sketch of this three-phase protocol follows the list).
  • MEDLEY-BENCH evaluates 35 models from 12 model families on 130 ambiguous cases across five domains and reports two complementary metrics: the Medley Metacognition Score (MMS), a tier-based aggregate of reflective updating, social robustness, and epistemic articulation, and the Medley Ability Score (MAS), derived from four metacognitive sub-skills.
  • Results reveal a strong dissociation between evaluation and control: evaluation ability tends to increase with model size within families, while control over revision does not show the same scaling pattern.
  • A progressive adversarial analysis identifies two revision profiles—models that revise mainly based on argument quality versus models that revise according to consensus statistics—while ipsative scoring shows evaluation is the weakest relative ability across all models.
  • The findings suggest a systematic “knowing/doing gap” in metacognition and indicate that smaller, cheaper models can match or outperform larger ones, implying metacognitive competence is not purely a function of scale.

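To make the three evaluation conditions concrete, here is a minimal sketch of how one benchmark case could be collected. The `query(model, prompt)` helper, the phase prompts, and the returned field names are illustrative assumptions, not the paper's actual harness or wording.

```python
# Sketch of the three-phase elicitation MEDLEY-BENCH-style scoring relies on.
# `query(model, prompt)` is a hypothetical helper returning a model's text answer.

def run_case(model_a: str, model_b: str, question: str, query) -> dict:
    """Collect independent, privately revised, and socially influenced answers."""
    # Phase 1: independent reasoning, no revision prompt and no social signal.
    initial = query(model_a, f"Answer the question: {question}")

    # Phase 2: private self-revision -- the model re-examines its own answer
    # without seeing any other model's view.
    private = query(
        model_a,
        f"Question: {question}\nYour earlier answer: {initial}\n"
        "Re-check your reasoning and give a final answer.",
    )

    # Phase 3: socially influenced revision -- the model is shown a genuinely
    # disagreeing answer produced by another model.
    peer = query(model_b, f"Answer the question: {question}")
    social = query(
        model_a,
        f"Question: {question}\nYour earlier answer: {initial}\n"
        f"Another model answered: {peer}\n"
        "Considering this disagreement, give a final answer.",
    )

    return {"initial": initial, "private": private, "social": social}
```
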
Abstract

Metacognition, the ability to monitor and regulate one's own reasoning, remains under-evaluated in AI benchmarking. We introduce MEDLEY-BENCH, a benchmark of behavioural metacognition that separates independent reasoning, private self-revision, and socially influenced revision under genuine inter-model disagreement. The benchmark evaluates 35 models from 12 families on 130 ambiguous instances across five domains and reports two complementary scores: the Medley Metacognition Score (MMS), a tier-based aggregate of reflective updating, social robustness, and epistemic articulation, and the Medley Ability Score (MAS), derived from four metacognitive sub-abilities. Results show a robust evaluation/control dissociation: evaluation ability increases with model size within families, whereas control does not. In a follow-up progressive adversarial analysis of 11 models, we observed two behavioural profiles: models that revise primarily in response to argument quality and models that track consensus statistics. Under within-model relative profiling (ipsative scoring), evaluation was the weakest relative ability in all 35 models, indicating a systematic knowing/doing gap. Smaller and cheaper models often matched or outperformed larger counterparts, suggesting that metacognitive competence is not simply a function of scale. These findings position MEDLEY-BENCH as a tool for measuring belief revision under social pressure and suggest that future training should reward calibrated, proportional updating rather than output quality alone.
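
The ipsative scoring mentioned in the abstract compares a model's sub-abilities against its own average rather than against other models. The sketch below illustrates that idea under stated assumptions: the sub-ability names and the example numbers are invented for illustration, and the paper's actual MAS sub-abilities and normalisation may differ.

```python
from statistics import mean, pstdev

def ipsative_profile(scores: dict[str, float]) -> dict[str, float]:
    """Within-model z-scores: centre one model's sub-ability scores on its own mean."""
    mu = mean(scores.values())
    sigma = pstdev(scores.values()) or 1.0  # guard against zero spread
    return {ability: (s - mu) / sigma for ability, s in scores.items()}

# Hypothetical raw sub-ability scores for a single model (illustrative only).
raw = {"monitoring": 0.72, "evaluation": 0.55, "control": 0.63, "articulation": 0.70}

profile = ipsative_profile(raw)
weakest = min(profile, key=profile.get)
print(profile)   # relative profile, independent of the model's absolute level
print(weakest)   # -> "evaluation" in this made-up example
```

Because the profile is computed per model, a small model and a large model can both show "evaluation" as their weakest relative ability even if their absolute scores differ widely, which is the kind of within-model pattern the abstract describes.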