Application-Driven Pedagogical Knowledge Optimization of Open-Source LLMs via Reinforcement Learning and Supervised Fine-Tuning
arXiv cs.CL / 2026/4/9
Key Points
- The paper proposes a multi-stage training approach that combines reinforcement learning (RL) and supervised fine-tuning (SFT) to improve LLMs’ pedagogical knowledge for education-focused tasks.
- The RL stage uses techniques such as progressive difficulty training, emphasis on challenging examples, and extended reasoning rollouts; an SFT stage then distills higher-quality data from the RL-trained model using difficulty-weighted sampling (a sketch of these data-scheduling ideas follows this list).
- An optional second RL round is described, creating an extensible pipeline for further pedagogical optimization.
- Using EduQwen 32B-RL1, EduQwen 32B-SFT, and EduQwen 32B-SFT-RL2 built on a dense Qwen3-32B backbone, the authors report new state-of-the-art results on pedagogical benchmarks (including the interactive Pedagogy Benchmark Leaderboard) and performance that surpasses larger proprietary systems like Gemini-3 Pro.
- The work argues that domain-specialized optimization can turn mid-sized, open-source LLMs into effective educational domain experts while maintaining transparency, customizability, and cost-efficiency.
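The paper's training code is not reproduced here, but the two data-scheduling ideas named above can be sketched compactly. The snippet below is a minimal, hypothetical Python illustration: the `Example` class, the difficulty scores, and the function names are assumptions made for this sketch, and a real pipeline would feed these schedules into an actual RL trainer and SFT data builder rather than printing statistics.

```python
import random
from dataclasses import dataclass


@dataclass
class Example:
    prompt: str
    difficulty: float  # hypothetical score in [0, 1]; harder prompts closer to 1


def progressive_difficulty_schedule(examples, num_stages=3):
    """Order training prompts from easy to hard and split them into
    curriculum-style RL stages (progressive difficulty training)."""
    ordered = sorted(examples, key=lambda e: e.difficulty)
    stage_size = (len(ordered) + num_stages - 1) // num_stages
    return [ordered[i:i + stage_size] for i in range(0, len(ordered), stage_size)]


def difficulty_weighted_sample(examples, k, temperature=1.0):
    """Sample k prompts with probability increasing in difficulty, so the
    SFT distillation set over-represents challenging examples."""
    weights = [max(e.difficulty, 1e-6) ** (1.0 / temperature) for e in examples]
    return random.choices(examples, weights=weights, k=k)


if __name__ == "__main__":
    pool = [Example(f"prompt-{i}", random.random()) for i in range(1000)]

    # Stage 1: RL curriculum, easy -> hard, with the hardest prompts seen last.
    for stage, batch in enumerate(progressive_difficulty_schedule(pool)):
        mean_d = sum(e.difficulty for e in batch) / len(batch)
        print(f"RL stage {stage}: {len(batch)} prompts, mean difficulty {mean_d:.2f}")

    # Stage 2: build the SFT distillation set by sampling prompts whose
    # responses would be generated by the RL-trained model, skewed toward
    # harder prompts via difficulty-weighted sampling.
    sft_prompts = difficulty_weighted_sample(pool, k=200)
    mean_d = sum(e.difficulty for e in sft_prompts) / len(sft_prompts)
    print(f"SFT distillation set mean difficulty: {mean_d:.2f}")
```

The `temperature` knob in this sketch controls how strongly the SFT set skews toward hard prompts; large values flatten the weights toward uniform sampling, while values below 1 concentrate the set on the most challenging examples.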

