LLM Neuroanatomy III - LLMs seem to think in geometry, not language

Reddit r/LocalLLaMA / 4/20/2026

💬 Opinion | Ideas & Deep Analysis | Models & Research

Key Points

  • A revised “LLM Neuroanatomy III” article reports experiments showing that LLMs organize concepts in internal vector space more by subject matter than by the language used.
  • The author expands testing from 2 to 8 languages (EN, ZH, AR, RU, JA, KO, HI, FR) across multiple models, finding that in middle-layer representations, a photosynthesis prompt in Hindi sits closer to the same prompt in Japanese than to an unrelated Hindi prompt (like cooking).
  • A harder follow-up compares English descriptions, Python function formulations (with constrained variable naming), and LaTeX equations for the same physics concept (e.g., ½mv²), which converge toward the same region of the model’s internal representations.
  • Results are reported as consistent across several dense and MoE transformer architectures from different organizations, suggesting a convergent internal representation rather than a model-specific or training-artifact effect.
  • The post argues against a Sapir–Whorf-style framing for these models (language as what shapes thought) while aligning more with a Chomsky-like idea of shared deep structure for concepts.

Hi Reddit!

Last month I posted the third part of my series of articles on LLM Neuroanatomy just before I left to go on holiday 🏝️. Unfortunately, it was a bit 'sloppy', as I didn't have time to polish it, so I took the article down and deleted the Reddit post.

Over the weekend, I revised the article and added the results for Gemma-4 31B! I'm also wrapping up Gemma-4-31B-RYS (the analysis will run overnight), and will release Qwen3.6-35B-RYS this week too.

OK, if you've been following the series, you know how in part II I said LLMs seem to think in a universal language? That was based on a tiny experiment comparing Chinese to English. This time I went deeper.

TL;DR:

Using an interesting new technique, you can see how LLMs organise concepts as vectors. With the cool trick of comparing several concepts across several languages, we can see where in the transformer stack the LLM is 'thinking' in terms of the language it is reading/writing, versus the actual topic.
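To give a flavour of the comparison, here's a toy numpy sketch (random stand-in vectors, not actual model activations — the real pipeline extracts hidden states from the models in the repo) of how cosine similarity tells you whether two prompts land in the same region of vector space:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two hidden-state-style vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy stand-ins for middle-layer vectors: two "photosynthesis" prompts
# (different languages) share a direction; "cooking" points elsewhere.
rng = np.random.default_rng(0)
concept = rng.normal(size=256)
photo_hi = concept + 0.1 * rng.normal(size=256)  # photosynthesis, Hindi
photo_ja = concept + 0.1 * rng.normal(size=256)  # photosynthesis, Japanese
cook_hi = rng.normal(size=256)                   # cooking, Hindi

# Same concept across languages scores higher than same language
# across concepts — the pattern the blog reports in the middle layers.
print(cosine(photo_hi, photo_ja), cosine(photo_hi, cook_hi))
```

The same comparison, run layer by layer, is what shows language identity dominating at the bottom of the stack and concept identity dominating in the middle.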

The Sapir-Whorf hypothesis is, simply put, that language shapes what you can and can't think. The data in the blog shows that for LLMs (I'm making no claims about people), language is just the I/O, and the thinking happens in the middle layers as vectors about concepts.

TL;DR for those who (I know) won't read the blog:

  1. I expanded the experiment from 2 languages to 8 (EN, ZH, AR, RU, JA, KO, HI, FR) across 5 different models (Qwen3.5-27B, MiniMax M2.5, GLM-4.7, GPT-OSS-120B and Gemma-4 31B). All five show the same thing. In the middle layers, a sentence about photosynthesis in Hindi is closer to photosynthesis in Japanese than it is to cooking in Hindi. Language identity basically vanishes!
  2. Then I did the harder test: English descriptions, Python functions (single-letter variables only, no cheating by calling the variable 'velocity'), and LaTeX equations for the same concepts. ½mv², 0.5 * m * v ** 2, and "half the mass times velocity squared" start to converge to the same region in the model's internal space.
  3. This replicates across dense transformers and MoE architectures from five different orgs. Not a Qwen thing. Not a training artifact, but what seems to be a convergent solution.
  4. The post connects this to Sapir-Whorf (language shapes thought → nope, not in these models) and Chomsky (universal deep structure → yes, but it's geometry not grammar). If you're into that kind of nerdy thing, you might like the discussion...
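For point 2, the Python formulation looked like the snippet below — my own minimal example of the single-letter-variable style described above, not the exact probe code from the repo. The constraint matters: with the variable named `v` instead of `velocity`, the model can't match the English description by surface tokens alone.

```python
def e(m, v):
    """Kinetic energy with single-letter variables only,
    so no lexical overlap with 'mass' or 'velocity'."""
    return 0.5 * m * v ** 2

# Three surface forms of the same concept probed in the experiment:
#   LaTeX:   \frac{1}{2} m v^2
#   English: "half the mass times velocity squared"
#   Python:  0.5 * m * v ** 2
print(e(2.0, 3.0))
```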

Blog with interactive PCA visualisations you can actually play with: https://dnhkng.github.io/posts/sapir-whorf/
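If you can't open the interactive plots, here's a toy numpy sketch (random stand-in vectors, not the real hidden states from the blog) of the kind of PCA projection they show, where points cluster by concept rather than by language:

```python
import numpy as np

def pca_2d(X):
    """Project rows of X onto their first two principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

# Toy stand-ins: two concepts x two languages, 64 dims each.
rng = np.random.default_rng(1)
photo, cook = rng.normal(size=64), rng.normal(size=64)
X = np.stack([
    photo + 0.1 * rng.normal(size=64),  # photosynthesis, Hindi
    photo + 0.1 * rng.normal(size=64),  # photosynthesis, Japanese
    cook + 0.1 * rng.normal(size=64),   # cooking, Hindi
    cook + 0.1 * rng.normal(size=64),   # cooking, Japanese
])
Y = pca_2d(X)

# In 2D, same-concept points sit close; cross-concept points sit far apart.
d_same = np.linalg.norm(Y[0] - Y[1])   # photosynthesis HI vs JA
d_cross = np.linalg.norm(Y[0] - Y[2])  # photosynthesis HI vs cooking HI
print(d_same, d_cross)
```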

Code and data: https://github.com/dnhkng/RYS

On the RYS front — still talking with TurboDerp about the ExLlamaV3 pointer-based format for zero-VRAM-overhead layer duplication. No ETA but it's happening.

Again, play with the widget! It's really cool, I promise!

submitted by /u/Reddactor