Me, Myself, and $\pi$ : Evaluating and Explaining LLM Introspection

arXiv cs.AI, 2026-03-24


Key Points

  • The paper tackles the problem that LLM “introspection” evaluations may conflate true meta-cognition with generic knowledge or text-based self-simulation, and proposes a taxonomy to make introspection components distinguishable.
  • It formalizes introspection as latent computation of specific operators over a model’s policy and parameters, aiming to ground introspection in mechanism rather than surface-level behavior.
  • The authors introduce Introspect-Bench, a multifaceted evaluation suite intended to rigorously measure introspection capabilities in a more controlled way.
  • Experiments suggest frontier models have privileged access to their own policies, outperforming peer models at predicting their own behavior.
  • The work includes causal/mechanistic evidence for how introspection can emerge without explicit training, attributing part of the mechanism to “attention diffusion.”
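The self-prediction comparison above can be illustrated with a toy sketch. This is an assumption-laden simplification, not the paper's Introspect-Bench code: models are stand-in Python functions rather than LLMs, and the function names (`self_prediction_accuracy`, `model_a`, etc.) are hypothetical. The idea is that "privileged access" shows up as a model predicting its own outputs more accurately than a peer model can.

```python
from typing import Callable, List

# A policy maps a prompt to the model's chosen answer.
Policy = Callable[[str], str]

def self_prediction_accuracy(behave: Policy, predict: Policy,
                             prompts: List[str]) -> float:
    """Fraction of prompts where `predict`'s guess matches what
    `behave` actually outputs."""
    hits = sum(predict(p) == behave(p) for p in prompts)
    return hits / len(prompts)

# Toy stand-ins (hypothetical, for illustration only).
def model_a(prompt: str) -> str:
    # Model A's behavior: an arbitrary deterministic rule.
    return "yes" if len(prompt) % 2 == 0 else "no"

def model_a_introspect(prompt: str) -> str:
    # Privileged access: A's introspective guess tracks its own rule exactly.
    return "yes" if len(prompt) % 2 == 0 else "no"

def model_b_predicts_a(prompt: str) -> str:
    # A peer model with only partial knowledge of A's policy.
    return "yes"

prompts = ["hi", "hello", "ok?", "sure", "maybe not"]
acc_self = self_prediction_accuracy(model_a, model_a_introspect, prompts)
acc_peer = self_prediction_accuracy(model_a, model_b_predicts_a, prompts)
```

Here `acc_self` is 1.0 while `acc_peer` is lower, mirroring the gap the benchmark is designed to measure between self-prediction and peer-prediction.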

Abstract

A hallmark of human intelligence is introspection, the ability to assess and reason about one's own cognitive processes. Introspection has emerged as a promising but contested capability in large language models (LLMs). However, current evaluations often fail to distinguish genuine meta-cognition from the mere application of general world knowledge or text-based self-simulation. In this work, we propose a principled taxonomy that formalizes introspection as the latent computation of specific operators over a model's policy and parameters. To isolate the components of generalized introspection, we present Introspect-Bench, a multifaceted evaluation suite designed for rigorous capability testing. Our results show that frontier models exhibit privileged access to their own policies, outperforming peer models in predicting their own behavior. Furthermore, we provide causal, mechanistic evidence explaining both how LLMs learn to introspect without explicit training, and how the mechanism of introspection emerges via attention diffusion.