The Presupposition Problem in Representation Genesis
arXiv cs.AI / 3/24/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper argues that large language models uniquely raise the “representation genesis” question because we cannot determine whether they have undergone the transition from non-representing physical states to content-sensitive representational states.
- It claims existing philosophy-of-mind frameworks (e.g., Language of Thought, teleosemantics, predictive processing, enactivism, genetic phenomenology) share a structural flaw: they use concepts that only make explanatory sense if the system is already organized as a representer.
- This shared flaw is labeled the “Representation Presupposition structure,” which leads to “explanatory deferral” and a regress where accounts import representational resources from the very side they aim to explain.
- Rather than proposing a new mechanism, the paper provides a conceptual diagnosis and derives two minimum adequacy conditions that any account must satisfy to avoid the presupposition/regress pattern.
- The absence of a satisfying theory is presented as newly consequential: LLMs exhibit high cognitive-like performance even though the genesis question remains unresolved.