AI Navigate

GRPO and Reflection Reward for Mathematical Reasoning in Large Language Models

arXiv cs.AI / 3/17/2026


Key Points

  • The authors introduce a four-stage framework that couples Group Relative Policy Optimization (GRPO) with reflection reward mechanisms to strengthen LLMs' self-reflective mathematical reasoning during training.
  • The approach also incorporates conventional accuracy and format rewards to ensure reliable and well-structured outputs.
  • Experimental results show GRPO with reflection-encouraged training achieves state-of-the-art performance, with ablation studies highlighting the pivotal role of the reflection reward.
  • The paper also finds that full-parameter supervised fine-tuning (SFT) outperforms LoRA, albeit at greater computational cost.
  • The authors position GRPO as a post-training optimization step and envision it enabling future LLM-based intelligent agents through cognitive rewards and dynamic environmental interaction.
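The three reward components described above (accuracy, format, and reflection) can be sketched as a single scoring function. The function names, the `<think>` tag convention, and the reflection cue phrases below are illustrative assumptions, not the authors' actual implementation:

```python
# Hypothetical composite reward in the spirit of the paper's three components.
# Markers and weights below are assumptions for illustration only.
import re

def accuracy_reward(completion: str, reference_answer: str) -> float:
    """1.0 if the boxed final answer matches the reference, else 0.0."""
    match = re.search(r"\\boxed\{([^}]*)\}", completion)
    return 1.0 if match and match.group(1).strip() == reference_answer else 0.0

def format_reward(completion: str) -> float:
    """Reward well-structured output: reasoning wrapped in <think> tags."""
    return 0.5 if re.search(r"<think>.*?</think>", completion, re.DOTALL) else 0.0

def reflection_reward(completion: str) -> float:
    """Reward explicit self-reflection cues (e.g. re-checking a step)."""
    cues = ("let me verify", "double-check", "on reflection")
    return 0.5 if any(c in completion.lower() for c in cues) else 0.0

def total_reward(completion: str, reference_answer: str) -> float:
    """Sum of the three components; GRPO would score each sampled completion this way."""
    return (accuracy_reward(completion, reference_answer)
            + format_reward(completion)
            + reflection_reward(completion))
```

For example, a completion such as `"<think>2+2=4. Let me verify: yes.</think> \boxed{4}"` scored against reference answer `"4"` would earn all three components.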

Abstract

The enhancement of reasoning capabilities in large language models (LLMs) has garnered significant attention, with supervised fine-tuning (SFT) and reinforcement learning emerging as dominant paradigms. While recent studies recognize the importance of reflection in reasoning processes, existing methodologies seldom address proactive reflection encouragement during training. This study focuses on mathematical reasoning by proposing a four-stage framework integrating Group Relative Policy Optimization (GRPO) with reflection reward mechanisms to strengthen LLMs' self-reflective capabilities. In addition, the approach incorporates established accuracy and format rewards. Experimental results demonstrate GRPO's state-of-the-art performance through reflection-encouraged training, with ablation studies confirming the reflection reward's pivotal role. Comparative evaluations demonstrate full-parameter SFT's superiority over low-rank adaptation (LoRA) despite heightened computational demands. Building on these cumulative findings, this research substantiates GRPO's methodological significance in post-training optimization and envisions its potential to serve as a pivotal enabler for future LLM-based intelligent agents through the synergistic integration of cognitive rewards with dynamic environmental interactions.
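The defining feature of GRPO referenced in the abstract is that it replaces a learned value function with group-relative normalization: several completions are sampled per prompt, each is scored, and each completion's advantage is its reward standardized within the group. A minimal sketch of that normalization step (the full objective also includes a clipped policy ratio and a KL penalty, omitted here):

```python
# Minimal sketch of GRPO's group-relative advantage computation.
# For one prompt, sample G completions, score each with the reward function,
# then standardize rewards within the group: A_i = (r_i - mean) / (std + eps).
from statistics import mean, pstdev

def group_relative_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    """Advantage of each sampled completion relative to its own group."""
    mu = mean(rewards)
    sigma = pstdev(rewards)  # population std over the group
    return [(r - mu) / (sigma + eps) for r in rewards]
```

Completions that beat their group's average receive positive advantages and are reinforced; below-average ones are penalized, so no separate critic network is needed.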