MEDLEY-BENCH: Scale Buys Evaluation but Not Control in AI Metacognition
arXiv cs.AI / 4/20/2026
Key Points
- The paper introduces MEDLEY-BENCH, a benchmark for behavioural metacognition that explicitly disentangles independent reasoning, private self-revision, and socially influenced revision under real disagreements between models.
- MEDLEY-BENCH evaluates 35 models from 12 model families on 130 ambiguous cases across five domains and reports two complementary metrics: MMS (reflective updating, social robustness, epistemic articulation) and MAS (four metacognitive sub-skills).
- Results reveal a strong dissociation between evaluation and control: evaluation ability tends to increase with model size within families, while control over revision does not show the same scaling pattern.
- A progressive adversarial analysis identifies two revision profiles: models that revise mainly based on argument quality, and models that revise according to consensus statistics.
- Ipsative scoring shows that evaluation is the weakest relative ability across all models.
- The findings suggest a systematic “knowing/doing gap” in metacognition and indicate that smaller, cheaper models can match or outperform larger ones, implying metacognitive competence is not purely a function of scale.
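Ipsative scoring, mentioned above, reads each model's sub-skill scores relative to that model's own average rather than against other models, so a "weakest relative ability" is a within-model statement. A minimal sketch of the idea (the sub-skill names and score values below are illustrative, not taken from the paper):

```python
# Ipsative scoring: center each model's sub-skill scores on that model's
# own mean, so strengths and weaknesses are read within the model.
# Sub-skill names and scores are illustrative, not from MEDLEY-BENCH.

def ipsative(scores: dict[str, float]) -> dict[str, float]:
    """Return each sub-skill score minus the model's own mean score."""
    mean = sum(scores.values()) / len(scores)
    return {skill: s - mean for skill, s in scores.items()}

model_a = {"monitoring": 0.72, "control": 0.55,
           "evaluation": 0.48, "articulation": 0.65}
centered = ipsative(model_a)
weakest = min(centered, key=centered.get)
print(weakest)  # evaluation: lowest score relative to this model's own mean
```

Because the scores are mean-centered per model, a model with uniformly high raw scores can still show a pronounced relative weakness, which is how a sub-skill can be "weakest across all models" even when its absolute level scales with size.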