Towards Reward Modeling for AI Tutors in Math Mistake Remediation

arXiv cs.CL, 26 March 2026


Key Points

  • The paper tackles the difficulty of evaluating AI tutor pedagogy, noting that common NLG metrics can’t reliably measure whether the model correctly finds mistakes, scaffolds reasoning, or withholds answers appropriately.
  • It introduces a reward-modeling approach for math mistake remediation by deriving a hierarchy of pedagogical aspects from human preferences on MRBench.
  • The authors synthesize minimally contrastive response pairs that isolate key improvement dimensions such as mistake identification/location, targetedness, scaffolding quality, actionability, clarity, and coherence.
  • They train Bradley–Terry preference models on automatically generated weighted-sum rankings from MRBench, on synthetic pairs, and on combinations of the two (a minimal sketch of the training objective follows this list).
  • Results show strong performance from synthetic-only training (0.69 pairwise accuracy), with further gains to 0.74 when weighted-sum data is combined with targeted synthetic groups; the best system outperforms larger general-purpose reward models while using only a ~0.5B-parameter backbone.
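
To make the Bradley–Terry training concrete, here is a minimal sketch assuming a HuggingFace-style backbone with a scalar reward head. The pooling choice, class names, and function signatures are illustrative assumptions, not the paper's released code.

```python
# Minimal sketch of Bradley-Terry reward-model training (assumptions:
# HuggingFace backbone, mean pooling, scalar linear head).
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel

class RewardModel(nn.Module):
    def __init__(self, backbone_name: str):
        super().__init__()
        # e.g., a ~0.5B-parameter encoder/decoder; the exact checkpoint
        # is not specified here.
        self.backbone = AutoModel.from_pretrained(backbone_name)
        self.head = nn.Linear(self.backbone.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        hidden = self.backbone(input_ids=input_ids,
                               attention_mask=attention_mask).last_hidden_state
        # Mean-pool over non-padding tokens to get one vector per response.
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
        return self.head(pooled).squeeze(-1)  # one scalar reward per response

def bradley_terry_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor):
    # Bradley-Terry model: P(chosen > rejected) = sigmoid(r_chosen - r_rejected).
    # Training minimizes the negative log-likelihood of observed preferences.
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```

In a training step, the (dialogue context, tutor response) text for each side of a preference pair would be tokenized and scored, with the loss applied to the reward difference; pairwise accuracy at evaluation is then the fraction of human preference pairs whose ordering the learned rewards reproduce.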

Abstract

Evaluating the pedagogical quality of AI tutors remains challenging: standard NLG metrics do not capture whether responses identify mistakes, scaffold reasoning, or avoid revealing the answer. For the task of mistake remediation, we derive a hierarchy of pedagogical aspects from human pairwise preferences on MRBench, and synthesize minimally contrastive response pairs that differ along key aspects (e.g., mistake identification and location, targetedness, scaffolding, actionability, clarity, and coherence). We develop and release Bradley–Terry preference models trained on weighted-sum rankings that we automatically create from MRBench, on synthetic pairs, and on combinations of the two. Using only synthetic data, our best model reaches 0.69 pairwise accuracy on a human preference test set, and combining weighted-sum data with targeted synthetic groups improves accuracy to 0.74, outperforming larger general-purpose reward models while using only a 0.5B-parameter backbone.
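
For concreteness, one plausible reading of the weighted-sum ranking construction described in the abstract is sketched below. The aspect names follow the hierarchy listed above, but the specific weights, field names, and pair-generation scheme are assumptions; the paper derives its weighting from human preferences on MRBench.

```python
# Hedged sketch: collapse per-aspect scores into one scalar, rank responses
# to the same student turn, and emit (chosen, rejected) pairs for training.
# The weights below are placeholders, not the paper's learned values.
ASPECT_WEIGHTS = {
    "mistake_identification": 0.25,
    "mistake_location": 0.15,
    "targetedness": 0.15,
    "scaffolding": 0.20,
    "actionability": 0.10,
    "clarity": 0.10,
    "coherence": 0.05,
}

def weighted_score(aspect_scores: dict[str, float]) -> float:
    """Weighted sum of per-aspect scores for a single tutor response."""
    return sum(w * aspect_scores.get(a, 0.0) for a, w in ASPECT_WEIGHTS.items())

def make_preference_pairs(responses: list[dict]) -> list[tuple[str, str]]:
    """Rank candidate responses by weighted score (highest first) and
    return all ordered (chosen, rejected) text pairs."""
    ranked = sorted(responses, key=lambda r: weighted_score(r["aspects"]),
                    reverse=True)
    return [(ranked[i]["text"], ranked[j]["text"])
            for i in range(len(ranked))
            for j in range(i + 1, len(ranked))]
```

Pairs produced this way would feed the Bradley–Terry objective sketched earlier; the strongest reported setting mixes such automatically ranked MRBench pairs with the targeted synthetic groups.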