Limits of Difficulty Scaling: Hard Samples Yield Diminishing Returns in GRPO-Tuned SLMs
arXiv cs.LG / 4/9/2026
Key Points
- The paper tests whether reinforcement-learning fine-tuning (GRPO with LoRA) improves reasoning accuracy in smaller language models (up to 3B parameters) as math problem difficulty increases (see the training sketch after this list).
- Accuracy plateaus on the harder difficulty tiers, suggesting GRPO mostly reshapes output preferences rather than reliably expanding the model's capability to solve the most difficult problems.
- Training with GRPO on only the lower-difficulty problems can match full-dataset accuracy across difficulty tiers while using about 45% of the training steps, indicating diminishing returns from including the hardest examples (see the filtering sketch below).
- A cross-dataset effect is observed: a GSM8K-trained GRPO model performs better on MATH’s numeric subset than a MATH-trained GRPO model, with gains of roughly 5% at 1.5B and 3% at 3B.
- The authors conclude that achievable improvements depend strongly on the base model’s initial reasoning competence and the target dataset’s difficulty distribution.
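As a rough illustration of the setup in the first key point, here is a minimal sketch of GRPO fine-tuning with a LoRA adapter, assuming Hugging Face TRL's `GRPOTrainer` and PEFT. The model name, reward function, and hyperparameters are illustrative assumptions, not the paper's reported configuration.

```python
# Minimal GRPO + LoRA sketch, assuming the TRL and PEFT libraries.
# Model, reward, and hyperparameters are illustrative, not the paper's.
from datasets import load_dataset
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer

def exact_match_reward(completions, **kwargs):
    """Reward 1.0 when a completion contains the reference final answer."""
    # GSM8K answers end with "#### <final answer>"; compare final answers only.
    finals = [a.split("####")[-1].strip() for a in kwargs["answer"]]
    return [1.0 if f in c else 0.0 for c, f in zip(completions, finals)]

dataset = load_dataset("openai/gsm8k", "main", split="train")
dataset = dataset.rename_column("question", "prompt")  # GRPOTrainer expects a "prompt" column

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-1.5B-Instruct",  # a sub-3B model, matching the paper's scale
    reward_funcs=exact_match_reward,
    args=GRPOConfig(output_dir="grpo-gsm8k", num_generations=8),
    train_dataset=dataset,
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),
)
trainer.train()
```

GRPO scores each prompt's sampled completions relative to the group's mean reward, so training largely reweights behaviors the base model can already produce; this is consistent with, though not proof of, the plateau the paper reports on the hardest tiers.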
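The low-difficulty-only regime in the third key point amounts to dropping the hardest tiers before training. A minimal filtering sketch, assuming MATH-style `level` labels; the dataset ID, column name, and cutoff are assumptions:

```python
# Difficulty-tier filtering sketch; the dataset ID, "level" labels, and the
# Level 1-3 cutoff are assumptions, not the paper's exact tiering.
from datasets import load_dataset

dataset = load_dataset("hendrycks/competition_math", split="train")

# Keep only the easier tiers and drop the hardest problems.
easy_subset = dataset.filter(lambda ex: ex["level"] in {"Level 1", "Level 2", "Level 3"})
print(f"kept {len(easy_subset)}/{len(dataset)} problems")
```

Running GRPO (as sketched above) on a subset like `easy_subset` is the kind of setup the paper reports matching full-dataset accuracy at roughly 45% of the training steps.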