GFT: From Imitation to Reward Fine-Tuning with Unbiased Group Advantages and Dynamic Coefficient Rectification
arXiv cs.AI / 4/17/2026
Key Points
- The paper analyzes training dynamics to argue that conventional supervised fine-tuning (SFT) can be viewed as a fragile form of policy-gradient optimization, suffering from sparse implicit rewards and unstable inverse-probability weighting (see the derivation sketch after this list).
- It shows how these problems can cause single-path dependency, entropy collapse, and gradient explosion, limiting the ability to combine efficient knowledge injection with strong generalization.
- To address this, the authors introduce Group Fine-Tuning (GFT), a unified post-training framework with two key components.
- Group Advantage Learning forms diverse response groups and uses normalized contrastive supervision to reduce reward sparsity, while Dynamic Coefficient Rectification adaptively bounds the inverse-probability weights to stabilize training (see the code sketch after this list).
- Experiments report that GFT consistently outperforms SFT-based baselines and produces policies that transition more smoothly into subsequent reinforcement learning (RL).
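
The "SFT as a fragile policy gradient" reading in the first point follows from a standard identity; here is a minimal derivation sketch in generic notation (not the paper's), where y* is the single demonstration for prompt x:

```latex
% Cross-entropy SFT gradient on a single demonstration (x, y*):
\nabla_\theta \mathcal{L}_{\mathrm{SFT}}(x, y^\star)
  = -\nabla_\theta \log \pi_\theta(y^\star \mid x)
  = -\frac{1}{\pi_\theta(y^\star \mid x)} \, \nabla_\theta \pi_\theta(y^\star \mid x)
```

Read as a policy gradient, this is an update with an implicit reward of 1 on the lone demonstration and 0 everywhere else (the sparse reward and single-path dependency above), scaled by a 1/π coefficient that blows up when the demonstration is unlikely under the current policy, which is the gradient-explosion risk that Dynamic Coefficient Rectification is designed to bound.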
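Below is a minimal PyTorch sketch of the two components as the summary describes them. All names here (`group_advantages`, `gft_token_loss`, `c_max`) are illustrative assumptions rather than the paper's API, and the fixed cap stands in for what the paper presumably implements as an adaptive bound.

```python
import torch


def group_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Normalize rewards within each group of sampled responses.

    rewards: (num_groups, group_size) scalar reward per response.
    Subtracting the group mean turns sparse absolute rewards into dense
    contrastive supervision: each response is scored against its peers.
    """
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)


def gft_token_loss(token_logp: torch.Tensor,
                   advantages: torch.Tensor,
                   c_max: float = 20.0) -> torch.Tensor:
    """Policy-gradient surrogate with a bounded inverse-probability weight.

    token_logp:  (batch, seq_len) log pi_theta(y_t | x, y_<t) per token.
    advantages:  (batch,) group-normalized advantage per response.
    c_max:       cap on the implicit 1/pi coefficient; a fixed stand-in
                 for the paper's dynamic rectification.
    """
    prob = token_logp.exp()                              # pi_theta per token
    coeff = (1.0 / prob.clamp_min(1e-12)).clamp(max=c_max).detach()
    # The gradient of -(coeff * prob) is -min(1/pi, c_max) * d(pi)/d(theta):
    # the usual grad-log-prob update, but with its 1/pi factor capped,
    # so unlikely tokens can no longer blow up the step size.
    per_token = coeff * prob * advantages.unsqueeze(-1)
    return -per_token.mean()
```

Whenever the cap is inactive (1/π ≤ c_max), the detached coefficient makes this reduce exactly to the standard advantage-weighted log-likelihood gradient; when a token's probability is very small, the capped coefficient damps the update instead of letting it explode.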