
Evolution and compression in LLMs: On the emergence of human-aligned categorization

arXiv cs.CL / 3/16/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper investigates whether LLMs can develop human-aligned semantic categories, using color naming as a testbed and evaluating model outputs against human data under the Information Bottleneck (IB) complexity-accuracy tradeoff (formalized after this list).
  • It finds that LLMs vary widely in complexity and in alignment with English color naming, with larger instruction-tuned models achieving better alignment and IB-efficiency.
  • The authors introduce Iterated in-Context Language Learning (IICLL) to simulate the cultural evolution of color-naming systems in LLMs, and observe that LLMs progressively restructure initially random systems toward greater IB-efficiency, much as humans do.
  • Only the model with the strongest in-context capabilities (Gemini 2.0) reproduces the full range of near-optimal IB tradeoffs seen in humans; other state-of-the-art models converge to low-complexity solutions.
  • The results suggest human-aligned semantic categories can emerge in LLMs via the same cognitive principle that drives semantic efficiency in humans, linking AI semantics to human categorization theory.
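For readers unfamiliar with the IB framing referenced above, the display below gives the standard objective from the efficient-communication literature; it is supplied for context as an assumption, since the summary itself does not spell the formula out. Meanings M (e.g., color percepts) are encoded into words W by a stochastic naming policy q(w|m), and U is the referent the listener must recover.

```latex
% Standard IB objective for semantic systems (supplied for context;
% notation follows the efficient-communication literature, not a
% quotation from the paper). Complexity is I_q(M;W); accuracy is I_q(W;U).
\[
  \min_{q(w \mid m)} \; F_\beta[q] \;=\; I_q(M;W) \;-\; \beta \, I_q(W;U),
  \qquad \beta \ge 1 .
\]
% Sweeping beta traces the optimal complexity-accuracy frontier; a
% naming system is IB-efficient when it lies near that frontier.
```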

Abstract

Converging evidence suggests that human systems of semantic categories achieve near-optimal compression via the Information Bottleneck (IB) complexity-accuracy tradeoff. Large language models (LLMs) are not trained for this objective, which raises the question: are LLMs capable of evolving efficient human-aligned semantic systems? To address this question, we focus on color categorization -- a key testbed of cognitive theories of categorization with uniquely rich human data -- and replicate with LLMs two influential human studies. First, we conduct an English color-naming study, showing that LLMs vary widely in their complexity and English-alignment, with larger instruction-tuned models achieving better alignment and IB-efficiency. Second, to test whether these LLMs simply mimic patterns in their training data or actually exhibit a human-like inductive bias toward IB-efficiency, we simulate cultural evolution of pseudo color-naming systems in LLMs via a method we refer to as Iterated in-Context Language Learning (IICLL). We find that, akin to humans, LLMs iteratively restructure initially random systems towards greater IB-efficiency. However, only the model with the strongest in-context capabilities (Gemini 2.0) is able to recapitulate the wide range of near-optimal IB-tradeoffs observed in humans, while other state-of-the-art models converge to low-complexity solutions. These findings demonstrate how human-aligned semantic categories can emerge in LLMs via the same fundamental principle that underlies semantic efficiency in humans.
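To make the IICLL loop concrete, here is a minimal runnable sketch of iterated in-context learning over a pseudo color-naming system. Everything in it is an illustrative assumption: the chip inventory, the pseudo terms, and `llm_name_chip` (a trivial nearest-neighbor stand-in for an actual model call) are hypothetical, showing only the shape of the iteration, not the authors' protocol.

```python
import random

# Hypothetical IICLL sketch: each generation, a "learner" sees a sample of
# (chip -> word) pairs from the previous generation's naming system, then
# labels every chip itself; its outputs seed the next generation.

CHIPS = list(range(330))          # e.g., indices into a Munsell-style chart
WORDS = ["A", "B", "C", "D"]      # pseudo color terms (assumed inventory)

def llm_name_chip(examples, chip):
    """Stand-in for an LLM call: in the real setup the examples would go
    into the prompt and the model would answer. Here we copy the label of
    the nearest exemplified chip so the sketch runs without an API."""
    nearest = min(examples, key=lambda ex: abs(ex[0] - chip))
    return nearest[1]

def one_generation(system, n_examples=40):
    """Transmit a naming system through one simulated learner."""
    examples = random.sample(list(system.items()), n_examples)
    return {chip: llm_name_chip(examples, chip) for chip in CHIPS}

# Generation 0: an initially random system, as in the paper's setup.
system = {chip: random.choice(WORDS) for chip in CHIPS}
for gen in range(10):
    system = one_generation(system)
    # The study would measure IB complexity and accuracy at each step;
    # here we just track how many distinct terms survive transmission.
    print(f"gen {gen}: {len(set(system.values()))} terms in use")
```

The question the paper poses is whether a real LLM in the `llm_name_chip` role drifts toward the IB frontier across generations, rather than merely copying or collapsing the initial system.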