Representation in large language models
arXiv cs.CL / 5/4/2026
Key Points
- The paper examines a core theoretical dispute about what mechanisms drive large language model (LLM) behavior: representation-based information processing versus memorization and stochastic lookup.
- It frames the question as identifying what kind of algorithm LLMs implement, arguing that both representation-based processing and memorization-style behavior likely contribute.
- The author discusses how the mechanism question has downstream consequences for philosophical and cognitive-science issues, such as whether LLMs could be said to have beliefs, intentions, concepts, knowledge, or understanding.
- The paper proposes and defends practical techniques for investigating the internal representations of LLMs and for using those findings to build explanations grounded in observed structure.
- The work aims to unblock broader theorizing by providing a foundation and tools for future research on language models and their successors.
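The paper does not specify its investigative techniques in this summary, but a standard method in the interpretability literature it gestures at is linear probing: training a simple classifier to test whether a property is linearly decodable from a model's hidden activations. The sketch below is illustrative only, using synthetic activations in place of real LLM hidden states; the latent `direction`, dimensions, and learning rate are all assumptions for the demo.

```python
# Minimal linear-probe sketch (a common interpretability technique; the
# paper's own methods may differ). Question: is a binary property
# linearly decodable from hidden activations? The "activations" here
# are synthetic stand-ins for real LLM hidden states.
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 32                        # examples, hidden-state dimension

# Hypothetical setup: one latent direction encodes the property, plus noise.
direction = rng.normal(size=d)
labels = rng.integers(0, 2, size=n)   # 0/1 property labels
acts = rng.normal(size=(n, d)) + np.outer(2 * labels - 1, direction)

# Fit a logistic-regression probe with plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    z = np.clip(acts @ w + b, -30, 30)   # clip to avoid exp overflow
    p = 1.0 / (1.0 + np.exp(-z))         # sigmoid
    w -= 0.5 * (acts.T @ (p - labels) / n)
    b -= 0.5 * float(np.mean(p - labels))

acc = float(np.mean(((acts @ w + b) > 0) == labels))
print(f"probe accuracy: {acc:.2f}")   # high accuracy => linearly decodable
```

A high probe accuracy suggests the property is represented in the activations; on real models, the same logic is applied to hidden states extracted at a chosen layer, with held-out data and control tasks to rule out the probe itself doing the work.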