How LLMs Might Think
arXiv cs.AI / 4/14/2026
💬 Opinion · Signals & Early Trends · Ideas & Deep Analysis · Models & Research
Key Points
- The article examines the “argument from rationality” (due to Daniel Stoljar and Zhihe Vincent Zhang), which holds that large language models may not be capable of thinking.
- It pushes back on that argument: even if LLMs cannot think in a rational, rule-governed way, they could still perform arational, associative forms of cognition.
- The authors’ positive thesis is that if LLMs think at all, their thinking is likely purely associative rather than rule-based or deliberatively rational (see the sketch after this list).
- The piece is framed as a research announcement and conceptual analysis rather than a report of new model releases or experiments.
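To make the contrast in the third key point concrete, here is a minimal, hypothetical Python sketch (not from the paper; all function names, rules, and weights are illustrative assumptions). A rule-based step derives a conclusion by applying an explicit inference rule, while an associative step merely retrieves the continuation most strongly correlated with the current context.

```python
# Hypothetical toy contrast (not from the paper): rule-based inference
# vs. purely associative prediction.

def rule_based_step(premises: set[str]) -> set[str]:
    """Apply an explicit rule (modus ponens over 'A -> B' strings).
    The conclusion is added *because* the rule licenses it."""
    derived = set(premises)
    for p in premises:
        if "->" in p:
            antecedent, consequent = (s.strip() for s in p.split("->", 1))
            if antecedent in premises:
                derived.add(consequent)
    return derived

def associative_step(context: list[str],
                     co_occurrence: dict[tuple[str, str], float]) -> str:
    """Return the continuation most strongly associated with the last
    token. No rule is represented or applied; only learned weights."""
    last = context[-1]
    candidates = {b: w for (a, b), w in co_occurrence.items() if a == last}
    return max(candidates, key=candidates.get)

# Rule-based: 'wet' is derived from 'rain' and the rule 'rain -> wet'.
print(rule_based_step({"rain", "rain -> wet"}))

# Associative: 'wet' is produced only because it co-occurs with 'rain'.
weights = {("rain", "wet"): 0.9, ("rain", "umbrella"): 0.7}
print(associative_step(["rain"], weights))
```

The point of the sketch is that the associative step can reproduce the rule-based output without representing any rule at all, which is the kind of arational cognition the key points attribute to LLMs.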
Related Articles
- Black Hat Asia (AI Business)
- Big Tech firms are accelerating AI investments and integration, while regulators and companies focus on safety and responsible adoption (Dev.to)
- Don't forget, there is more than forgetting: new metrics for Continual Learning (Dev.to)
- Microsoft MAI-Image-2-Efficient Review 2026: The AI Image Model Built for Production Scale (Dev.to)
- Bit of a strange question? (Reddit r/artificial)