Can Large Language Models Simulate Human Cognition Beyond Behavioral Imitation?

arXiv cs.CL / 3/31/2026


Key Points

  • The paper investigates whether large language models can simulate aspects of human cognition rather than only imitating observable behavior, addressing limitations of existing datasets that use synthetic traces or aggregated population data.
  • It introduces a benchmark based on longitudinal publication histories of 217 AI researchers, treating each author’s work as an external proxy for individual cognitive processes.
  • To test whether LLMs transfer cognitive patterns, the benchmark uses a cross-domain, temporal-shift generalization setup rather than standard within-domain evaluation.
  • The authors propose a multidimensional cognitive alignment metric to measure individual-level cognitive consistency and run systematic evaluations of state-of-the-art LLMs plus enhancement techniques.
  • The study is positioned as an initial empirical step toward answering two questions: how well current LLMs simulate human cognition, and how much existing techniques can improve those abilities.
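To make the cross-domain, temporal-shift setup concrete, here is a minimal sketch of how such a split could be constructed. The paper does not specify its exact protocol; the `Paper` record, the cutoff rule, and the single-source-domain conditioning below are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Paper:
    author: str
    domain: str  # hypothetical label, e.g. "NLP" or "CV"
    year: int

def temporal_cross_domain_split(papers, cutoff_year, source_domain):
    """Hypothetical split: condition the model on an author's earlier work
    in one domain, then evaluate on that author's later work in *other*
    domains. Success would suggest transfer of cognitive patterns rather
    than imitation of domain-specific surface behavior."""
    context = [p for p in papers
               if p.year < cutoff_year and p.domain == source_domain]
    target = [p for p in papers
              if p.year >= cutoff_year and p.domain != source_domain]
    return context, target

history = [
    Paper("alice", "NLP", 2019),
    Paper("alice", "CV", 2021),
    Paper("alice", "NLP", 2022),
]
context, target = temporal_cross_domain_split(history, 2021, "NLP")
```

In this toy run the 2019 NLP paper forms the conditioning context, the 2021 CV paper is the evaluation target, and the 2022 NLP paper is excluded because it falls in the source domain.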

Abstract

An essential problem in artificial intelligence is whether large language models (LLMs) can simulate human cognition or merely imitate surface-level behaviors. Existing datasets fall short on both fronts: they rely on synthetic reasoning traces or population-level aggregation, and so fail to capture authentic individual cognitive patterns. We introduce a benchmark grounded in the longitudinal research trajectories of 217 researchers across diverse domains of artificial intelligence, where each author's scientific publications serve as an externalized representation of their cognitive processes. To distinguish whether LLMs transfer cognitive patterns or merely imitate behaviors, our benchmark deliberately employs a cross-domain, temporal-shift generalization setting. A multidimensional cognitive alignment metric is further proposed to assess individual-level cognitive consistency. Through systematic evaluation of state-of-the-art LLMs and various enhancement techniques, we provide a first-stage empirical study on two questions: (1) How well do current LLMs simulate human cognition? (2) How far can existing techniques enhance these capabilities?
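The abstract names a multidimensional cognitive alignment metric without defining it. One plausible shape for such a metric, sketched purely as an assumption, is a weighted average of per-dimension similarities between predicted and observed feature vectors; the dimension names and the use of cosine similarity here are hypothetical, not the paper's definition.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors; 0.0 if either is zero.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def cognitive_alignment(predicted, observed, weights=None):
    """Hypothetical alignment score: weighted mean of per-dimension
    cosine similarities. Dimensions might be feature vectors for, e.g.,
    topic choice, method preference, or problem framing (illustrative
    labels, not the paper's)."""
    dims = predicted.keys() & observed.keys()
    if weights is None:
        weights = {d: 1.0 for d in dims}
    total = sum(weights[d] for d in dims)
    return sum(weights[d] * cosine(predicted[d], observed[d])
               for d in dims) / total

pred = {"topics": [0.8, 0.2], "methods": [0.5, 0.5]}
obs = {"topics": [0.7, 0.3], "methods": [0.4, 0.6]}
score = cognitive_alignment(pred, obs)  # a value in [0, 1] here
```

A score near 1 would indicate that the simulated researcher's trajectory tracks the real one across all measured dimensions, while per-dimension terms localize where the simulation diverges.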