Measuring the metacognition of AI

arXiv cs.AI / 4/1/2026


Key Points

  • The paper argues that as AI systems are used in high-stakes decision workflows, measuring their metacognitive capabilities—how well they assess the reliability of their own outputs—becomes essential.
  • It proposes the meta-d' framework (and model-free alternatives) as the gold-standard approach for evaluating metacognitive sensitivity: how effectively confidence ratings separate correct from incorrect answers (a model-free sketch follows this list).
  • It further leverages signal detection theory (SDT) to quantify whether AI models spontaneously regulate their decisions under uncertainty and varying levels of risk.
  • The authors validate the methodology with experiments on three LLMs (GPT-5, DeepSeek-V3.2-Exp, and Mistral-Medium-2508) using two experimental designs: confidence-rating after judgment, and risk-manipulated judgment without explicit confidence.
  • The results support using meta-d' for three kinds of comparison: an LLM against optimality, different LLMs on the same task, and the same LLM across tasks; SDT, in turn, can test whether models become more conservative as risk increases.
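
For illustration, here is a minimal, model-free sketch of metacognitive sensitivity as the key points describe it: the area under the type-2 ROC, i.e., the probability that confidence on a correct trial exceeds confidence on an error trial. This is not the authors' code (meta-d' proper requires fitting an SDT model), and the trial data below are toy values.

```python
import numpy as np

def type2_auroc(correct, confidence):
    """Model-free metacognitive sensitivity: P(confidence on a correct
    trial > confidence on an error trial), with ties counted as half."""
    correct = np.asarray(correct, dtype=bool)
    conf = np.asarray(confidence, dtype=float)
    conf_correct = conf[correct]    # confidence on correct trials
    conf_error = conf[~correct]     # confidence on error trials
    diff = conf_correct[:, None] - conf_error[None, :]
    wins = (diff > 0).sum() + 0.5 * (diff == 0).sum()
    return wins / diff.size

# Toy trials: 1 = correct answer, confidence on a 0-1 scale.
correct    = [1, 1, 1, 0, 0, 1, 0, 1]
confidence = [0.9, 0.8, 0.7, 0.6, 0.4, 0.95, 0.3, 0.5]
print(type2_auroc(correct, confidence))  # ~0.93; 0.5 = no sensitivity
```

A value near 1 means the model's confidence reliably flags its own errors; a value near 0.5 means confidence carries no information about correctness.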

Abstract

A robust decision-making process must take uncertainty into account, especially when the choice involves inherent risks. Because artificial intelligence (AI) systems are increasingly integrated into decision-making workflows, managing uncertainty relies more and more on the metacognitive capabilities of these systems, i.e., their ability to assess the reliability of, and to regulate, their own decisions. It is therefore crucial to employ robust methods to measure the metacognitive abilities of AI. This paper is primarily a methodological contribution arguing for the adoption of the meta-d' framework, or its model-free alternatives, as the gold standard for assessing the metacognitive sensitivity of AIs--the ability to generate confidence ratings that distinguish correct from incorrect responses. Moreover, we propose to leverage signal detection theory (SDT) to measure the ability of AIs to spontaneously regulate their decisions based on uncertainty and risk. To demonstrate the practical utility of these psychophysical frameworks, we conduct two series of experiments on three large language models (LLMs)--GPT-5, DeepSeek-V3.2-Exp, and Mistral-Medium-2508. In the first series, the LLMs performed a primary judgment followed by a confidence rating. In the second, they performed only the primary judgment while we manipulated the risk associated with each response option. On the one hand, the meta-d' framework lets us draw comparisons along three axes: an LLM against optimality, different LLMs on a given task, and the same LLM across different tasks. On the other hand, SDT lets us assess whether LLMs become more conservative when risks are high.
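
To make the SDT reading of the risk manipulation concrete, here is a minimal sketch (not the authors' code) of the standard type-1 measures: sensitivity d' and criterion c, computed from hit and false-alarm rates with a log-linear correction. In the second design, a shift toward a more positive c under high risk would be the signature of conservative responding. The trial counts below are made up for illustration.

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Type-1 SDT from trial counts, with a log-linear correction so
    hit/false-alarm rates of exactly 0 or 1 stay finite."""
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_h, z_f = norm.ppf(h), norm.ppf(f)
    d_prime = z_h - z_f             # sensitivity
    criterion = -0.5 * (z_h + z_f)  # response bias; > 0 = conservative
    return d_prime, criterion

# Made-up counts for one model under two risk conditions:
print(sdt_measures(80, 20, 30, 70))  # low risk:  d' ~ 1.35, c ~ -0.16
print(sdt_measures(60, 40, 10, 90))  # high risk: d' ~ 1.51, c ~ 0.50
```

In this toy example, sensitivity stays roughly constant while the criterion rises under high risk, which is exactly the kind of spontaneous conservative shift the paper's SDT analysis is designed to detect.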