Latent-GRPO: Group Relative Policy Optimization for Latent Reasoning

arXiv cs.LG / 5/1/2026

Key Points

  • The paper argues that reinforcement learning for latent reasoning is far less stable than supervised training, because moving to latent space changes both the probability density and the sampling mechanism.
  • It identifies three coupled bottlenecks when adapting GRPO to latent reasoning: off-manifold exploration (no intrinsic manifold constrains rollouts), exploration–optimization misalignment (trajectory-level rewards can induce incorrect token-level updates), and latent mixture non-closure (jointly reinforcing multiple correct latent paths can average into an invalid state).
  • The authors propose Latent-GRPO, which combines invalid-sample advantage masking, one-sided noise sampling, and optimal correct-path first-token selection; a minimal sketch of the masking step follows this list.
  • Experiments on eight benchmarks show Latent-GRPO improves over its latent initialization by 7.86 Pass@1 points on low-difficulty tasks and beats explicit GRPO by 4.27 points on high-difficulty tasks, while using 3–4× shorter reasoning chains.
  • The method also yields stronger Pass@k performance under Gumbel sampling, positioning Latent-GRPO as a stable and efficient recipe for latent reasoning with RL-style optimization.
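
The summary does not reproduce the paper's exact update rule, but the advantage-masking idea is easy to make concrete. Below is a minimal PyTorch sketch, assuming one group of rollouts per prompt, trajectory-level rewards, and a hypothetical per-rollout validity flag marking samples that left the latent manifold; the group z-scoring follows standard GRPO, while the masking rule here is an illustration, not the paper's exact formulation.

```python
import torch

def grpo_advantages_with_masking(rewards: torch.Tensor,
                                 valid: torch.Tensor,
                                 eps: float = 1e-6) -> torch.Tensor:
    """Group-relative advantages with invalid-sample masking (illustrative).

    rewards: (G,) trajectory-level rewards for one prompt's rollout group.
    valid:   (G,) bool flags; False marks a rollout judged off the latent
             manifold (hypothetical criterion, e.g., undecodable output).
    """
    v = valid.float()
    n = v.sum().clamp(min=1.0)
    # Baseline and scale are computed over valid rollouts only, so
    # off-manifold samples cannot distort the group statistics.
    mean = (rewards * v).sum() / n
    var = (((rewards - mean) ** 2) * v).sum() / n
    adv = (rewards - mean) / (var.sqrt() + eps)
    # Invalid rollouts get zero advantage and hence contribute no gradient.
    return adv * v
```

For example, on `rewards = torch.tensor([1., 0., 1., 0.])` with the third rollout flagged invalid, only the three valid samples define the baseline, and the invalid one receives zero advantage and therefore no policy-gradient signal.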

Abstract

Latent reasoning offers a more efficient alternative to explicit reasoning by compressing intermediate reasoning into continuous representations and substantially shortening reasoning chains. However, existing latent reasoning methods focus mainly on supervised learning, and reinforcement learning in latent space remains highly unstable. We study this problem through the lens of Group Relative Policy Optimization (GRPO) and show that directly adapting GRPO to latent reasoning is fundamentally non-trivial: latent reasoning changes both the probability density and the sampling mechanism, giving rise to three coupled bottlenecks. These are absence of intrinsic latent manifolds, where unconstrained exploration pushes rollouts off the valid latent manifold; exploration–optimization misalignment, where trajectory-level rewards can induce incorrect token-level updates; and latent mixture non-closure, where jointly reinforcing multiple correct latent paths can produce an invalid averaged state. To address them, we propose Latent-GRPO, which combines invalid-sample advantage masking, one-sided noise sampling, and optimal correct-path first-token selection. Across four low-difficulty benchmarks (e.g., GSM8K-Aug) and four high-difficulty benchmarks (e.g., AIME), Latent-GRPO improves over its latent initialization by 7.86 Pass@1 points on low-difficulty tasks and surpasses explicit GRPO by 4.27 points on high-difficulty tasks while using 3–4× shorter reasoning chains. It also achieves stronger Pass@k performance under Gumbel sampling. These results establish Latent-GRPO as an effective approach for stable and efficient latent reasoning.
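
To make the latent mixture non-closure bottleneck concrete, here is a one-line formalization; the manifold symbol \(\mathcal{M}\) and the convex-mixture framing are notation introduced for this summary, not taken from the paper.

```latex
% Latent mixture non-closure: validity is not preserved under averaging.
% If z_1 and z_2 are valid latent states reached by two different correct
% reasoning paths, membership in the valid manifold \mathcal{M} does not
% survive mixing:
\[
  z_1, z_2 \in \mathcal{M}
  \;\not\Longrightarrow\;
  \lambda z_1 + (1 - \lambda)\, z_2 \in \mathcal{M}
  \quad \text{for } \lambda \in (0, 1).
\]
% Jointly reinforcing both paths pulls the policy toward an averaged state
% such as \bar{z} = \tfrac{1}{2}(z_1 + z_2), which may be invalid.
```

In the paper's terms, this is exactly the failure mode where jointly reinforcing multiple correct latent paths produces an invalid averaged state.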