MC-CPO: Mastery-Conditioned Constrained Policy Optimization

arXiv cs.AI / 4/7/2026


Key Points

  • The paper identifies a key problem for engagement-optimized adaptive tutoring: reinforcement learning policies may favor short-term signals over long-term learning, creating structural incentives for reward hacking.
  • It models pedagogical safety as a constrained Markov decision process (CMDP) with mastery-conditioned feasibility: the set of admissible actions is restricted dynamically by the learner's current mastery and the prerequisite structure of the curriculum (see the masking sketch after this list).
  • The authors propose Mastery-Conditioned Constrained Policy Optimization (MC-CPO), a two-timescale primal-dual method that combines structural action masking with constrained policy optimization.
  • In tabular settings, the work proves feasibility preservation and convergence to stationary feasible points, and shows that optimization within the mastery-conditioned feasible set can outperform post-hoc filtering under the same safety budget.
  • Experiments in tabular and neural tutoring environments (10 seeds, up to one million neural training steps) show constraint satisfaction within tolerance, reduced discounted safety costs, and a substantial drop in the Reward Hacking Severity Index (RHSI).
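
To make the mastery-conditioned feasibility idea concrete, here is a minimal illustrative sketch in Python: a prerequisite graph plus a mastery threshold define which exercises are currently admissible, and infeasible actions are masked out of the policy's softmax. The skill names, threshold value, and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical prerequisite graph: each skill maps to the skills that must be
# mastered before an exercise targeting it becomes pedagogically admissible.
PREREQS = {
    "fractions": [],
    "ratios": ["fractions"],
    "proportions": ["fractions", "ratios"],
}

MASTERY_THRESHOLD = 0.8  # illustrative cutoff, not a value from the paper


def feasible_actions(mastery: dict[str, float]) -> list[str]:
    """Return the skills whose prerequisites all meet the mastery threshold."""
    return [
        skill
        for skill, prereqs in PREREQS.items()
        if all(mastery.get(p, 0.0) >= MASTERY_THRESHOLD for p in prereqs)
    ]


def masked_policy_probs(logits: np.ndarray, skills: list[str], feasible: set[str]) -> np.ndarray:
    """Mask infeasible actions before the softmax, so the policy can only
    sample exercises inside the mastery-conditioned feasible set."""
    mask = np.array([0.0 if s in feasible else -np.inf for s in skills])
    z = logits + mask
    z -= z.max()  # numerical stability
    probs = np.exp(z)
    return probs / probs.sum()


if __name__ == "__main__":
    learner = {"fractions": 0.9, "ratios": 0.4}
    feas = set(feasible_actions(learner))
    print(feas)  # {'fractions', 'ratios'}: 'proportions' is masked out
    skills = list(PREREQS)
    print(masked_policy_probs(np.zeros(len(skills)), skills, feas))  # [0.5 0.5 0. ]
```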

Abstract

Engagement-optimized adaptive tutoring systems may prioritize short-term behavioral signals over sustained learning outcomes, creating structural incentives for reward hacking in reinforcement learning policies. We formalize this challenge as a constrained Markov decision process (CMDP) with mastery-conditioned feasibility, in which pedagogical safety constraints dynamically restrict admissible actions according to learner mastery and prerequisite structure. We introduce Mastery-Conditioned Constrained Policy Optimization (MC-CPO), a two-timescale primal-dual algorithm that integrates structural action masking with constrained policy optimization. In the tabular regime, we establish feasibility preservation and convergence to stationary feasible points under standard stochastic approximation conditions and derive a safety gap result showing that optimization within the mastery-conditioned feasible set can strictly dominate post-hoc filtering under identical safety budgets. Empirical validation is conducted in minimal and extended tabular environments and in a neural tutoring setting. Across 10 random seeds and one million training steps in the neural regime, MC-CPO satisfies constraint budgets within tolerance, reduces discounted safety costs relative to unconstrained and reward-shaped baselines, and substantially lowers the Reward Hacking Severity Index (RHSI). These results indicate that embedding pedagogical structure directly into the feasible action space provides a principled foundation for mitigating reward hacking in instructional reinforcement learning systems.
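
The abstract characterizes MC-CPO as a two-timescale primal-dual algorithm that layers structural action masking on top of constrained policy optimization. Under the standard Lagrangian treatment of a CMDP, that pattern typically amounts to fast policy-gradient updates on the reward minus a dual-weighted safety cost, and slow nonnegative dual updates driven by the constraint violation. The sketch below illustrates that generic pattern only; the environment and policy interfaces, learning rates, and episode structure are assumptions for illustration, not the paper's implementation.

```python
import numpy as np


def primal_dual_sketch(env, policy, episodes=1000, budget=1.0,
                       lr_policy=1e-2, lr_dual=1e-3, gamma=0.99):
    """Generic primal-dual loop for a CMDP with structural action masking.

    Two timescales: policy parameters move on the faster stepsize (lr_policy),
    the dual variable lam moves on the slower one (lr_dual), and lam is
    projected back onto [0, inf) after each update.
    """
    lam = 0.0
    for _ in range(episodes):
        states, actions, rewards, costs = [], [], [], []
        s, done = env.reset(), False
        while not done:
            feasible = env.feasible_actions(s)   # mastery-conditioned mask
            a = policy.sample(s, feasible)       # sample only feasible actions
            states.append(s); actions.append(a)
            s, r, c, done = env.step(a)          # reward and safety cost
            rewards.append(r); costs.append(c)

        # Discounted returns of the Lagrangian objective r - lam * c.
        g, returns = 0.0, []
        for r, c in zip(reversed(rewards), reversed(costs)):
            g = (r - lam * c) + gamma * g
            returns.append(g)
        returns.reverse()

        # Fast timescale: policy-gradient step on the Lagrangian returns.
        policy.update(states, actions, returns, lr_policy)

        # Slow timescale: dual ascent on the constraint violation, projected to >= 0.
        disc_cost = sum((gamma ** t) * c for t, c in enumerate(costs))
        lam = max(0.0, lam + lr_dual * (disc_cost - budget))
    return policy, lam
```

Because actions outside the mastery-conditioned feasible set are never sampled in the first place, the dual variable only has to price the residual safety cost of feasible behavior, which is the intuition behind the paper's claim that optimizing inside the feasible set can dominate post-hoc filtering at the same budget.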