How LLMs Might Think

arXiv cs.AI / April 14, 2026


Key Points

  • The article examines an argument from rationality, developed by Daniel Stoljar and Zhihe Vincent Zhang, for the claim that large language models (LLMs) do not think.
  • It contends that this argument falters and, moreover, leaves open the possibility that LLMs engage in arational, associative forms of thinking.
  • The authors’ positive thesis is that if LLMs think at all, their “thinking” would likely be purely associative rather than rule-based or deliberatively rational.
  • The piece is framed as a research announcement and conceptual analysis rather than a report of new model releases or experiments.

Abstract

Do large language models (LLMs) think? Daniel Stoljar and Zhihe Vincent Zhang have recently developed an argument from rationality for the claim that LLMs do not think. We contend, however, that the argument from rationality not only falters, but leaves open an intriguing possibility: that LLMs engage only in arational, associative forms of thinking, and have purely associative minds. Our positive claim is that if LLMs think at all, they likely think precisely in this manner.