Beyond Behavior: Why AI Evaluation Needs a Cognitive Revolution

arXiv cs.AI / 4/8/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that Turing’s 1950 behavioral framing of machine intelligence became an epistemological constraint, shaping what kinds of evidence AI can treat as valid for attributing intelligence.
  • It claims that decades of AI evaluation infrastructure have embedded output-only testing, making it difficult or impossible to ask questions about internal mechanisms, process, and internal organization.
  • Drawing an analogy to the psychology shift from behaviorism to cognitivism, the authors argue AI needs a comparable “cognitive revolution” in evaluation rather than abandoning behavioral metrics.
  • The core claim is that behavioral evidence alone cannot support the construct-level claims AI researchers want to make, especially when different computational processes can produce identical outputs.
  • The paper outlines what a post-behaviorist epistemology for AI would look like and what new, previously unaskable questions it would enable about intelligence attribution.

Abstract

In 1950, Alan Turing proposed replacing the question "Can machines think?" with a behavioral test: if a machine's outputs are indistinguishable from those of a thinking being, the question of whether it truly thinks can be set aside. This paper argues that Turing's move was not only a pragmatic simplification but also an epistemological commitment, a decision about what kind of evidence counts as relevant to intelligence attribution, and that this commitment has quietly constrained AI research for seven decades. We trace how Turing's behavioral epistemology became embedded in the field's evaluative infrastructure, rendering unaskable a class of questions about process, mechanism, and internal organization that cognitive psychology, neuroscience, and related disciplines learned to ask. We draw a structural parallel to the behaviorist-to-cognitivist transition in psychology: just as psychology's commitment to studying only observable behavior prevented it from asking productive questions about internal mental processes until that commitment was abandoned, AI's commitment to behavioral evaluation prevents it from distinguishing between systems that achieve identical outputs through fundamentally different computational processes, a distinction on which intelligence attribution depends. We argue that the field requires an epistemological transition comparable to the cognitive revolution: not an abandonment of behavioral evidence, but a recognition that behavioral evidence alone is insufficient for the construct claims the field wishes to make. We articulate what a post-behaviorist epistemology for AI would involve and identify the specific questions it would make askable that the field currently has no way to ask.