Level Up: Defining and Exploiting Transitional Problems for Curriculum Learning

arXiv cs.LG · March 17, 2026


Key Points

  • The paper presents a method to measure the difficulty of individual problem instances relative to a model's current ability, enabling learner-specific curricula for curriculum learning.
  • It identifies transitional problems, those that become consistently easier as model ability grows, which induce a natural easy-to-hard training progression.
  • Experiments on chess and mathematics show that a curriculum that levels up from easier to harder transitional problems improves a model to the next tier of competence more efficiently than alternative training strategies.
  • The approach yields interpretable problem selection and provides a principled basis for step-by-step improvement in ML training.

Abstract

Curriculum learning--ordering training examples in a sequence to aid machine learning--takes inspiration from human learning, but has not gained widespread acceptance. Static strategies for scoring item difficulty rely on indirect proxy scores of varying quality and produce curricula that are not specific to the learner at hand. Dynamic approaches base difficulty estimates on gradient information, requiring considerable extra computation during training. We introduce a novel method for measuring the difficulty of individual problem instances directly relative to the ability of a given model, and identify transitional problems that are consistently easier as model ability increases. Applying this method to chess and mathematics, we find that training on a curriculum that "levels up" from easier to harder transitional problems most efficiently improves a model to the next tier of competence. These problems induce a natural progression from easier to harder items, which outperforms other training strategies. By measuring difficulty directly relative to model competence, our method yields interpretable problems, learner-specific curricula, and a principled basis for step-by-step improvement.
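The abstract's recipe (measure difficulty directly as a model's solve rate, keep problems that get consistently easier across ability checkpoints, then order them easy to hard) can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the solve-rate estimator, the non-strict monotonicity test for "transitional," and the toy threshold models are all assumptions made for the example.

```python
def solve_rate(model, problem, attempts=16):
    """Empirical probability that `model` solves `problem`.

    `model(problem)` is assumed to return True/False for one attempt;
    for stochastic models, more attempts give a better estimate.
    """
    return sum(bool(model(problem)) for _ in range(attempts)) / attempts

def transitional_curriculum(checkpoints, problems, attempts=16):
    """Build an easy-to-hard curriculum of transitional problems.

    `checkpoints` are models ordered by ability, weakest first.
    A problem is kept as "transitional" if its solve rate is
    non-decreasing across checkpoints (with noisy estimates, a small
    tolerance would be needed here). Difficulty is then measured
    relative to the strongest checkpoint, and problems are sorted
    from easiest to hardest.
    """
    curated = []
    for p in problems:
        rates = [solve_rate(m, p, attempts) for m in checkpoints]
        if all(b >= a for a, b in zip(rates, rates[1:])):
            difficulty = 1.0 - rates[-1]  # relative to current ability
            curated.append((difficulty, p))
    curated.sort(key=lambda t: t[0])  # stable: ties keep input order
    return [p for _, p in curated]

# Toy demo: deterministic models that solve a problem iff their
# skill meets its hardness, so solve rates are monotone by construction.
def make_model(skill):
    return lambda problem: skill >= problem["hardness"]

problems = [{"id": i, "hardness": h} for i, h in enumerate([0.5, 1.0, 2.0, 4.0])]
checkpoints = [make_model(s) for s in (0.5, 1.0, 2.0)]
curriculum = transitional_curriculum(checkpoints, problems)
print([p["id"] for p in curriculum])  # easiest to hardest: [0, 1, 2, 3]
```

Because difficulty is computed against the learner's own checkpoints rather than a fixed proxy score, the same problem pool yields a different ordering for a different model, which is the learner-specific property the abstract emphasizes.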