What do Language Models Learn and When? The Implicit Curriculum Hypothesis

arXiv cs.CL / 4/10/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper proposes the Implicit Curriculum Hypothesis, claiming that LLM pretraining follows a compositional and predictable sequence of skill acquisition across models and data mixtures.
  • It evaluates this idea using a suite of simple, composable diagnostic tasks (e.g., retrieval, morphological transformations, coreference, logic, and math) to identify “emergence points,” the training steps at which models first reach a fixed accuracy threshold.
  • Across four model families (410M–13B parameters), the order in which skills emerge is highly consistent across model pairs (reported Spearman correlation ρ = 0.81 across 45 pairs).
  • The authors find that composite abilities typically emerge after their component tasks, and they further show this structure is reflected in learned representations (similar task “function vector” representations correlate with similar training trajectories).
  • Using representation-derived signals from the task set, the paper predicts training trajectories for held-out compositional tasks with strong fit (R² = 0.68–0.84) without directly evaluating those tasks during training.
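To make the emergence-ordering methodology concrete, here is a minimal sketch of the two measurements involved: extracting an emergence point (the first checkpoint at which a task's accuracy crosses a fixed threshold) and comparing the resulting skill orderings of two models with Spearman's ρ. The task names, accuracy curves, and 0.5 threshold are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: emergence points + rank consistency across two models.
# Toy data throughout; the paper's actual tasks, checkpoints, and threshold differ.

THRESHOLD = 0.5  # assumed fixed accuracy threshold


def emergence_step(accuracy_by_step, threshold=THRESHOLD):
    """Return the first training step at which accuracy >= threshold."""
    for step, acc in sorted(accuracy_by_step.items()):
        if acc >= threshold:
            return step
    return None  # task never emerges within the observed run


def ranks(xs):
    """Average ranks (1-based); tied values share the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r


def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)


# Toy accuracy curves (step -> accuracy) for two models over three tasks
model_a = {
    "retrieval":   {1000: 0.2, 2000: 0.6, 3000: 0.9},
    "coreference": {1000: 0.1, 2000: 0.3, 3000: 0.7},
    "math":        {1000: 0.0, 2000: 0.2, 3000: 0.6},
}
model_b = {
    "retrieval":   {1000: 0.4, 2000: 0.8, 3000: 0.95},
    "coreference": {1000: 0.2, 2000: 0.5, 3000: 0.8},
    "math":        {1000: 0.1, 2000: 0.3, 3000: 0.55},
}

tasks = sorted(model_a)
order_a = [emergence_step(model_a[t]) for t in tasks]
order_b = [emergence_step(model_b[t]) for t in tasks]
print(f"Spearman rho between emergence orderings: {spearman(order_a, order_b):.2f}")
```

With many more tasks and all 45 model pairs, averaging these pairwise ρ values yields the consistency statistic the paper reports.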

Abstract

Large language models (LLMs) can perform remarkably complex tasks, yet the fine-grained details of how these capabilities emerge during pretraining remain poorly understood. Scaling laws on validation loss tell us how much a model improves with additional compute, but not what skills it acquires in which order. To remedy this, we propose the Implicit Curriculum Hypothesis: pretraining follows a compositional and predictable curriculum across models and data mixtures. We test this by designing a suite of simple, composable tasks spanning retrieval, morphological transformations, coreference, logical reasoning, and mathematics. Using these tasks, we track emergence points across four model families spanning sizes from 410M–13B parameters. We find that emergence orderings of when models reach fixed accuracy thresholds are strikingly consistent (ρ = 0.81 across 45 model pairs), and that composite tasks most often emerge after their component tasks. Furthermore, we find that this structure is encoded in model representations: tasks with similar function vector representations also tend to follow similar trajectories in training. By using the space of representations derived from our task set, we can effectively predict the training trajectories of simple held-out compositional tasks throughout the course of pretraining (R² = 0.68–0.84 across models) without previously evaluating them. Together, these results suggest that pretraining is more structured than loss curves reveal: skills emerge in a compositional order that is consistent across models and readable from their internals.
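The prediction result in the abstract can be illustrated with a minimal nearest-neighbor-style sketch: given "function vector" embeddings for a set of reference tasks and their accuracy trajectories over training, a held-out task's trajectory is estimated as a cosine-similarity-weighted average of the reference trajectories. The paper's exact predictor is not specified here; the vectors, curves, and weighting scheme below are toy assumptions.

```python
# Hypothetical sketch: predict a held-out task's training trajectory from
# representation similarity to reference tasks. All data below is synthetic.
import math


def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


def predict_trajectory(heldout_vec, ref_vecs, ref_curves):
    """Similarity-weighted average of reference accuracy curves.

    Negative similarities are clipped to zero so dissimilar tasks
    contribute nothing (an illustrative choice, not the paper's).
    """
    weights = [max(cosine(heldout_vec, v), 0.0) for v in ref_vecs]
    total = sum(weights)
    n_steps = len(ref_curves[0])
    return [
        sum(w * curve[t] for w, curve in zip(weights, ref_curves)) / total
        for t in range(n_steps)
    ]


# Toy function vectors and accuracy curves for three reference tasks
ref_vecs = [[1.0, 0.0], [0.8, 0.6], [0.0, 1.0]]
ref_curves = [
    [0.1, 0.5, 0.9],  # emerges early
    [0.1, 0.4, 0.8],
    [0.0, 0.1, 0.6],  # emerges late
]

# A held-out task whose vector resembles the first two references should
# inherit a mostly early-emerging predicted curve.
pred = predict_trajectory([0.9, 0.3], ref_vecs, ref_curves)
print([round(p, 2) for p in pred])
```

The reported R² of 0.68–0.84 would then be measured between predicted curves like `pred` and the held-out tasks' actual accuracy trajectories.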