Spontaneous Functional Differentiation in Large Language Models: A Brain-Like Intelligence Economy

arXiv cs.AI / 4/1/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • A new arXiv study reports that large language models spontaneously form “synergistic cores” where integrated information processing in middle layers exceeds what individual components produce.
  • The researchers use Integrated Information Decomposition across multiple model architectures, finding a pattern where middle layers show synergy while early and late layers are more redundant.
  • They describe the emergence of this layer organization as dynamic and resembling a physical phase transition as task difficulty increases.
  • Ablation experiments removing the synergistic components lead to catastrophic performance drops, which the authors interpret as evidence that these components underpin abstract reasoning, and as a potential bridge between artificial and biological intelligence.

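The paper's full Integrated Information Decomposition machinery is more involved, but the synergy/redundancy distinction it rests on can be illustrated with plain mutual information on two textbook toy systems. The sketch below (hypothetical numpy code, not the authors' implementation) contrasts an XOR target, where neither source alone is informative but the pair fully determines the output (pure synergy), with a copied target, where each source carries the same bit (pure redundancy):

```python
import numpy as np

def mutual_information(joint):
    """I(A;B) in bits from a joint probability table p(a, b)."""
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (pa @ pb)[mask])).sum())

# Synergistic system: Y = X1 XOR X2, with X1, X2 uniform and independent.
# p[x1, x2, y] is the full joint distribution.
p = np.zeros((2, 2, 2))
for x1 in (0, 1):
    for x2 in (0, 1):
        p[x1, x2, x1 ^ x2] = 0.25

i1 = mutual_information(p.sum(axis=1))         # I(X1; Y) = 0 bits
i2 = mutual_information(p.sum(axis=0))         # I(X2; Y) = 0 bits
i_joint = mutual_information(p.reshape(4, 2))  # I(X1,X2; Y) = 1 bit
print(i1, i2, i_joint)  # information exists only in the joint state: synergy

# Redundant system: Y = X1 = X2 (both sources carry the same bit).
q = np.zeros((2, 2, 2))
q[0, 0, 0] = q[1, 1, 1] = 0.5

r1 = mutual_information(q.sum(axis=1))         # I(X1; Y) = 1 bit
r_joint = mutual_information(q.reshape(4, 2))  # also 1 bit: the pair adds nothing
print(r1, r_joint)
```

In this framing, the study's claim is that middle-layer components behave like the XOR case (their joint state carries information absent from any part alone), while early and late layers behave more like the copied-bit case.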
Abstract

The evolution of intelligence in artificial systems provides a unique opportunity to identify universal computational principles. Here we show that large language models spontaneously develop synergistic cores, where information integration exceeds that of the individual parts, remarkably similar to the human brain. Using Integrated Information Decomposition across multiple architectures, we find that middle layers exhibit synergistic processing while early and late layers rely on redundancy. This organization is dynamic and emerges like a physical phase transition as task difficulty increases. Crucially, ablating synergistic components causes catastrophic performance loss, confirming their role as the physical substrate of abstract reasoning and bridging artificial and biological intelligence.
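The mechanics of the ablation test are simple to sketch: zero out a layer's activations and compare the network's behavior before and after the lesion. The toy network below (arbitrary sizes and random weights, purely illustrative of the method, not the paper's models or experiment) shows how removing the "middle" representation destroys everything downstream of it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-layer toy network standing in for a deep model's stack;
# the layer sizes and weights are arbitrary, chosen only for illustration.
W = [rng.standard_normal((8, 16)),
     rng.standard_normal((16, 16)),
     rng.standard_normal((16, 4))]

def forward(x, ablate_middle=False):
    h = np.tanh(x @ W[0])          # "early" layer
    h = np.tanh(h @ W[1])          # "middle" layer: the candidate synergistic core
    if ablate_middle:
        h = np.zeros_like(h)       # lesion: zero out the middle representation
    return h @ W[2]                # "late" readout

x = rng.standard_normal((5, 8))
baseline = forward(x)
lesioned = forward(x, ablate_middle=True)

# With the middle layer lesioned, the readout no longer depends on the
# input at all: every row collapses to the same (zero) output.
print(np.allclose(lesioned, 0.0))  # True
```

The paper's version of this test is of course performed on trained language models against task benchmarks; the point here is only the shape of the intervention: compare performance with and without the targeted components.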