Semantic Structure of Feature Space in Large Language Models
arXiv cs.CL / 5/1/2026
Key Points
- The paper reports that the geometric relationships among semantic features in large language model hidden states closely align with human psychological associations.
- It constructs feature vectors for 360 words and projects them onto 32 semantic axes (e.g., beautiful–ugly, soft–hard), finding strong correlations with human ratings on the corresponding semantic scales (first sketch after this list).
- The authors show that cosine similarities between semantic axes predict how strongly the corresponding scales correlate in human surveys (second sketch below).
- They further find that variance across the 32 semantic axes concentrates in a low-dimensional subspace, and that steering a word vector along one axis produces predictable spillover along the other axes in proportion to their cosine similarities (third sketch below).
- Overall, the results argue that LLM features should be analyzed not only in isolation, but also via their geometry, inter-axis relations, and the low-dimensional subspaces they form.
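A minimal sketch of the projection step, assuming each semantic axis is the unit-normalized difference between the hidden-state vectors of its two pole words (a common construction; the paper's exact feature extraction may differ). All names and the random placeholder data are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 768  # hidden-state dimensionality (assumed for illustration)

def axis_from_poles(pos_vec: np.ndarray, neg_vec: np.ndarray) -> np.ndarray:
    """Semantic axis as the unit vector from the negative pole to the positive pole."""
    diff = pos_vec - neg_vec
    return diff / np.linalg.norm(diff)

# Placeholders for the hidden-state vectors of two pole words
# (e.g., "beautiful" / "ugly"); real vectors would come from the model.
beautiful, ugly = rng.normal(size=d), rng.normal(size=d)
axis_beauty = axis_from_poles(beautiful, ugly)

# Projecting each of the 360 word vectors onto the axis yields a scalar
# score, analogous to a position on a semantic-differential scale.
word_vecs = rng.normal(size=(360, d))
scores = word_vecs @ axis_beauty

# With real data, these scores would be correlated against mean human
# ratings on the matching scale; random placeholders stand in here.
human_ratings = rng.normal(size=360)
r = np.corrcoef(scores, human_ratings)[0, 1]
print(f"Pearson r (placeholder data, so near zero): {r:.3f}")
```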
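For the inter-axis comparison, once the 32 axes are unit vectors their pairwise cosine similarities reduce to a Gram matrix. The sketch below uses placeholder data; with the real axes and the survey correlation matrix, the upper-triangle comparison expresses the reported relationship in this notation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_axes, d = 32, 768
axes = rng.normal(size=(n_axes, d))
axes /= np.linalg.norm(axes, axis=1, keepdims=True)  # unit-normalize each axis

# For unit vectors, pairwise cosine similarity is the Gram matrix.
axis_cos = axes @ axes.T

# Placeholder for the 32 x 32 correlation matrix of human ratings on the
# corresponding semantic scales (symmetric, unit diagonal).
human_corr = rng.uniform(-1, 1, size=(n_axes, n_axes))
human_corr = (human_corr + human_corr.T) / 2
np.fill_diagonal(human_corr, 1.0)

# Claimed relationship: off-diagonal entries of axis_cos predict the
# matching entries of human_corr.
iu = np.triu_indices(n_axes, k=1)
r = np.corrcoef(axis_cos[iu], human_corr[iu])[0, 1]
print(f"r between axis cosines and human scale correlations: {r:.3f}")
```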
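The last two findings follow from the same geometry. This sketch assumes PCA is run on the 32 axis directions (the paper may instead analyze the 360-word-by-32-axis score matrix; the computation is analogous), and then demonstrates the exact linear-algebra identity behind the spillover claim: adding alpha times axis i to a word vector shifts its score on axis j by alpha times the cosine between axes i and j.

```python
import numpy as np

rng = np.random.default_rng(2)
n_axes, d = 32, 768
axes = rng.normal(size=(n_axes, d))
axes /= np.linalg.norm(axes, axis=1, keepdims=True)

# 1) Low-dimensional subspace: PCA (via SVD on the mean-centered axis
#    matrix). The variance fraction captured by the top components shows
#    how concentrated the 32 axes are.
centered = axes - axes.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = (s**2) / (s**2).sum()
print("Variance explained by top 5 components:", explained[:5].round(3))

# 2) Spillover under steering: nudging a word vector by alpha along axis i
#    changes its score on axis j by alpha * cos(axis_i, axis_j). Because
#    the axes are unit vectors, axes[i] @ axes[j] is exactly that cosine.
word = rng.normal(size=d)
alpha, i = 2.0, 0
steered = word + alpha * axes[i]
delta = (steered - word) @ axes.T        # change in score on every axis
predicted = alpha * (axes[i] @ axes.T)   # cosine-based prediction
assert np.allclose(delta, predicted)
```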