Task Ecologies and the Evolution of World-Tracking Representations in Large Language Models

arXiv stat.ML / 4/8/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper presents a framework that treats language models as "evolving model organisms" and asks when autoregressive next-token learning selects for representations that track the world (its latent states).
  • It decomposes the next-token cross-entropy into the irreducible conditional entropy plus a Jensen–Shannon excess term, and shows that this excess vanishes if and only if the representation preserves the equivalence classes of the training ecology.
  • From this result it derives a quantitative definition of "ecological veridicality" for language models, and identifies the minimum-complexity zero-excess solution as the quotient partition by training equivalence.
  • For Transformers, the fixed-encoding analysis holds for frozen dense and frozen Mixture-of-Experts models, whereas in-context learning does not enlarge the model's separation set and per-task adaptation breaks the premise.
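The decomposition in the second key point can be written out as follows. The notation here is our reconstruction from the summary, not the paper's own symbols: \(S\) is the latent world state, \(X\) the next token, \(e\) a fixed encoding of states into codes, and \(p(\cdot \mid s)\) the next-token distribution in state \(s\).

```latex
% Bayes-optimal next-token cross-entropy under a fixed encoding e
% (our reconstruction of the claimed decomposition):
\mathrm{CE}(e)
  \;=\;
  \underbrace{H(X \mid S)}_{\text{irreducible}}
  \;+\;
  \underbrace{\sum_{c} \Pr[e(S)=c]\,
    \mathrm{JS}_{\pi_c}\!\bigl(\{\, p(\cdot \mid s) : e(s)=c \,\}\bigr)
  }_{\text{excess}}
```

Here \(\mathrm{JS}_{\pi_c}\) is the generalized Jensen–Shannon divergence of the distributions merged under code \(c\), weighted by the states' conditional priors \(\pi_c\). The excess term is zero exactly when every group of merged states shares a single next-token distribution, i.e. when \(e\) preserves the training ecology's equivalence classes.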

Abstract

We study language models as evolving model organisms and ask when autoregressive next-token learning selects for world-tracking representations. For any encoding of latent world states, the Bayes-optimal next-token cross-entropy decomposes into the irreducible conditional entropy plus a Jensen–Shannon excess term. That excess vanishes if and only if the encoding preserves the training ecology's equivalence classes. This yields a precise notion of ecological veridicality for language models and identifies the minimum-complexity zero-excess solution as the quotient partition by training equivalence. We then determine when this fixed-encoding analysis applies to transformer families: frozen dense and frozen Mixture-of-Experts transformers satisfy it, in-context learning does not enlarge the model's separation set, and per-task adaptation breaks the premise. The framework predicts two characteristic failure modes: simplicity pressure preferentially removes low-gain distinctions, and training-optimal models can still incur positive excess on deployment ecologies that refine the training ecology. A conditional dynamic extension shows how inter-model selection and post-training can recover such gap distinctions under explicit heredity, variation, and selection assumptions. Exact finite-ecology checks and controlled microgpt experiments validate the static decomposition, split-merge threshold, off-ecology failure pattern, and two-ecology rescue mechanism in a regime where the relevant quantities are directly observable. The goal is not to model frontier systems at scale, but to use small language models as laboratory organisms for theory about representational selection.
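The "exact finite-ecology checks" mentioned in the abstract can be illustrated numerically. The sketch below is our own toy construction, not the paper's code: three latent states with next-token distributions, where an encoding that merges only states with identical distributions (the quotient partition) incurs zero excess, while an encoding that collapses a distinction the ecology uses pays a positive Jensen–Shannon penalty.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits, ignoring zero-probability entries."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz]))

# Toy ecology: 3 latent world states, each with a next-token distribution.
# States 1 and 2 are training-equivalent (identical distributions).
priors = np.array([0.5, 0.3, 0.2])
cond = np.array([
    [0.9, 0.1, 0.0],  # state 0
    [0.1, 0.9, 0.0],  # state 1
    [0.1, 0.9, 0.0],  # state 2 (same distribution as state 1)
])

def excess(encoding):
    """Jensen-Shannon excess of a fixed state->code encoding.

    For each code, the Bayes-optimal prediction is the prior-weighted
    mixture of the merged states' distributions; the excess is the
    prior-weighted generalized JS divergence within each merged group.
    """
    total = 0.0
    for code in set(encoding):
        idx = [i for i, c in enumerate(encoding) if c == code]
        w = priors[idx]
        pi = w / w.sum()                      # conditional priors within the group
        mix = pi @ cond[idx]                  # Bayes-optimal prediction for this code
        group_js = entropy(mix) - pi @ np.array([entropy(cond[i]) for i in idx])
        total += w.sum() * group_js
    return total

# Quotient partition: merge only the equivalent states 1 and 2 -> zero excess.
print(excess([0, 1, 1]))  # 0.0
# Collapsing states 0 and 1 destroys a distinction the ecology uses.
print(excess([0, 0, 1]))  # positive
```

Run as written, the quotient encoding `[0, 1, 1]` gives excess 0 while `[0, 0, 1]` gives roughly 0.4 bits, matching the "split-merge" intuition: mergers are free exactly when the merged states are training-equivalent.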