Evolution and compression in LLMs: On the emergence of human-aligned categorization
arXiv cs.CL / 3/16/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper investigates whether LLMs develop human-aligned semantic categories, using color naming as the test domain and the Information Bottleneck (IB) tradeoff between complexity and accuracy as the efficiency benchmark, comparing model outputs to human color-naming data (a minimal sketch of the IB coordinates follows this list).
- IB-alignment and naming complexity vary with model size and instruction tuning: larger instruction-tuned models show better alignment with human data and greater IB-efficiency.
- The authors introduce Iterated In-Context Language Learning (IICLL) to simulate the cultural evolution of color naming in LLMs, and observe that models progressively restructure initially random naming systems toward greater IB-efficiency, much as humans do (see the loop sketched below).
- Only the strongest in-context learners (Gemini 2.0) reproduce the full range of near-optimal IB tradeoffs observed in human languages, while other state-of-the-art models converge to low-complexity solutions.
- The results suggest that human-aligned semantic categories can emerge in LLMs through the same efficiency pressure that shapes human semantic systems, linking LLM semantics to theories of human categorization.
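To make the IB tradeoff concrete, below is a minimal NumPy sketch of the two coordinates typically measured in the IB color-naming framework (in the spirit of Zaslavsky et al., 2018, which this line of work builds on): complexity I(M;W) of a naming system q(w|m) and accuracy I(W;U) under a Bayesian decoder. The array names and toy setup are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def _mi(joint, p_x, p_y):
    """Mutual information in bits from a joint distribution and its marginals."""
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(joint > 0, joint / (p_x[:, None] * p_y[None, :]), 1.0)
    return float((joint * np.log2(ratio)).sum())

def ib_coordinates(p_m, m_u, q_w_m):
    """Return (complexity, accuracy) = (I(M;W), I(W;U)) in bits.

    p_m   : (n_meanings,)            prior over meanings (color chips)
    m_u   : (n_meanings, n_universe) each meaning as a distribution over the universe
    q_w_m : (n_meanings, n_words)    naming system q(w | m), rows sum to 1
    """
    joint_mw = p_m[:, None] * q_w_m                 # p(m, w)
    p_w = joint_mw.sum(axis=0)                      # p(w)
    complexity = _mi(joint_mw, p_m, p_w)            # I(M;W)

    # Bayesian decoder: p(u | w) = sum_m p(m | w) m(u)
    p_m_given_w = joint_mw / np.maximum(p_w, 1e-12)
    p_u_given_w = p_m_given_w.T @ m_u               # (n_words, n_universe)
    joint_wu = p_w[:, None] * p_u_given_w           # p(w, u)
    p_u = joint_wu.sum(axis=0)
    accuracy = _mi(joint_wu, p_w, p_u)              # I(W;U)
    return complexity, accuracy

# Toy usage: 4 chips named by 2 words -> (1.0, 1.0) bits
p_m = np.full(4, 0.25)
m_u = np.eye(4)  # each chip "means" exactly itself
q_w_m = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
print(ib_coordinates(p_m, m_u, q_w_m))
```

A naming system is IB-efficient when its (complexity, accuracy) point lies near the theoretical IB frontier; the paper's alignment question is whether LLM-generated systems land near the same region of that plane as human languages.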
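The summary describes IICLL only at a high level, but the iterated-learning template it invokes is standard: each generation, a learner sees a sample of the current naming system in-context and must name the full space, and its output becomes the next generation's input. The sketch below assumes a hypothetical `llm_label(examples, chip)` callable standing in for an actual LLM query; the function name, signature, and parameters are illustrative, not the paper's API.

```python
import random

def iterated_in_context_learning(llm_label, chips, n_generations=10, n_examples=40):
    """Illustrative iterated-learning loop in the spirit of IICLL.

    llm_label(examples, chip) -> word: assumed stand-in for prompting an
    LLM with in-context (chip, word) pairs and asking it to name a new chip.
    """
    # Generation 0: a random naming system over a fixed word inventory.
    words = ["w1", "w2", "w3", "w4"]
    system = {chip: random.choice(words) for chip in chips}

    for gen in range(n_generations):
        # Transmission bottleneck: the learner sees only a sample of the system.
        shown = random.sample(chips, min(n_examples, len(chips)))
        examples = [(c, system[c]) for c in shown]
        # The next generation's system is the LLM's reconstruction of the full space.
        system = {chip: llm_label(examples, chip) for chip in chips}
    return system
```

Tracking the (complexity, accuracy) coordinates of `system` across generations, as in the sketch above, is how one would observe the reported drift of random systems toward IB-efficiency.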