Placing Puzzle Pieces Where They Matter: A Question Augmentation Framework for Reinforcement Learning

arXiv cs.LG / April 20, 2026


Key Points

  • The paper addresses a reinforcement-learning dilemma for LLM reasoning: easy-problem training can cause overfitting and pass@k degradation, while hard-problem training can yield sparse rewards.
  • Existing question augmentation methods add partial-solution hints uniformly, which can waste effort on redundant hints, miss the key reasoning bottlenecks, and, when hints are excessive, reduce reasoning diversity.
  • It introduces PieceHint, a training-time hint injection framework that scores the importance of reasoning steps and selectively provides hints according to problem difficulty.
  • PieceHint also progressively withdraws scaffolding so the model moves from guided learning toward more independent reasoning during training.
  • Experiments on six mathematical reasoning benchmarks report that a 1.5B model matches the average performance of 32B baselines while maintaining pass@k diversity across multiple k values.

Abstract

Reinforcement learning has become a powerful approach for enhancing large language model reasoning, but faces a fundamental dilemma: training on easy problems can cause overfitting and pass@k degradation, while training on hard problems often results in sparse rewards. Recent question augmentation methods address this by prepending partial solutions as hints. However, uniform hint provision may introduce redundant information while missing critical reasoning bottlenecks, and excessive hints can reduce reasoning diversity, causing pass@k degradation. We propose **PieceHint**, a hint injection framework that strategically identifies and provides critical reasoning steps during training. By scoring the importance of different reasoning steps, selectively allocating hints based on problem difficulty, and progressively withdrawing scaffolding, PieceHint enables models to transition from guided learning to independent reasoning. Experiments on six mathematical reasoning benchmarks show that our 1.5B model achieves comparable average performance to 32B baselines while preserving pass@k diversity across all k values.
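The three ingredients the abstract names (importance scoring, difficulty-based allocation, progressive withdrawal) can be sketched as a single hint-selection routine. This is a minimal illustration, not the paper's actual algorithm: the function name, the linear budget formula, and the inputs (per-step importance scores, a difficulty estimate such as 1 minus the observed pass rate, and a training-progress fraction) are all assumptions for the sake of the example.

```python
def select_hints(step_scores, difficulty, progress, max_frac=0.5):
    """Pick which solution steps to reveal as hints (illustrative sketch).

    step_scores: importance score per reasoning step (higher = more critical),
                 hypothetical stand-in for the paper's step-importance scoring.
    difficulty:  estimated problem difficulty in [0, 1], e.g. 1 - pass rate.
    progress:    training progress in [0, 1]; hints are withdrawn as it grows.
    Returns sorted indices of the steps to prepend as hints.
    """
    # Budget: harder problems get more hints; scaffolding decays over training.
    budget = int(len(step_scores) * max_frac * difficulty * (1.0 - progress))
    # Reveal the highest-importance steps first, keeping their original order.
    ranked = sorted(range(len(step_scores)),
                    key=lambda i: step_scores[i], reverse=True)
    return sorted(ranked[:budget])
```

With this toy schedule, a hard problem early in training gets the two most critical of four steps revealed, while by the end of training the hint budget shrinks to zero and the model must reason unaided.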