Segment-Aligned Policy Optimization for Multi-Modal Reasoning

arXiv cs.AI · May 5, 2026

📰 News · Models & Research

Key Points

  • The paper argues that reinforcement learning for large language models often optimizes policies at the wrong granularity (tokens or whole sequences), which harms credit assignment and training stability in multi-modal reasoning tasks.
  • It introduces Segment-Aligned Policy Optimization (SAPO), which updates policies using coherent reasoning steps/segments instead of individual tokens or entire responses.
  • SAPO models reasoning as a step-wise Markov decision process over reasoning segments and adds segment-level value estimation, advantage computation, and importance sampling aligned to reasoning boundaries (see the sketch after this list).
  • Experiments on reasoning benchmarks show SAPO outperforms token-level and sequence-level policy optimization, with notable accuracy gains as well as improved training stability and value estimation consistency.
  • The authors plan to release code and models for reproducibility, and frame the work as a step toward semantically grounded RL for complex reasoning.
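To make the segment-level mechanics concrete, the following is a minimal sketch of how segment importance ratios and a clipped surrogate loss could be computed from token log-probabilities and reasoning-step boundaries. It assumes a PPO-style clipped objective lifted from tokens to segments; all function names and signatures here are hypothetical illustrations, not the paper's released code.

```python
# Hypothetical sketch: PPO-style clipped objective at segment granularity.
# Assumes per-token log-probs for one response and (start, end) index pairs
# marking reasoning-step boundaries; not the paper's actual implementation.
import torch


def segment_logprobs(token_logprobs: torch.Tensor,
                     boundaries: list[tuple[int, int]]) -> torch.Tensor:
    """Sum token log-probs within each (start, end) reasoning segment."""
    return torch.stack([token_logprobs[s:e].sum() for s, e in boundaries])


def sapo_clipped_loss(new_token_logprobs: torch.Tensor,
                      old_token_logprobs: torch.Tensor,
                      boundaries: list[tuple[int, int]],
                      segment_advantages: torch.Tensor,
                      clip_eps: float = 0.2) -> torch.Tensor:
    """Clipped surrogate with ratios and advantages computed per segment
    rather than per token or per whole sequence."""
    new_seg = segment_logprobs(new_token_logprobs, boundaries)
    # Old-policy log-probs carry no gradient.
    old_seg = segment_logprobs(old_token_logprobs.detach(), boundaries)
    ratio = torch.exp(new_seg - old_seg)  # segment importance ratio
    unclipped = ratio * segment_advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * segment_advantages
    # Negate: minimize the loss to maximize the clipped surrogate.
    return -torch.min(unclipped, clipped).mean()
```

In practice the boundaries would come from whatever segmenter delimits reasoning steps (e.g., step markers in the generated chain of thought), and the segment advantages from a segment-level value head.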

Abstract

Existing reinforcement learning approaches for Large Language Models typically perform policy optimization at the granularity of individual tokens or entire response sequences. However, such formulations often misalign with the natural step-wise structure of reasoning processes, leading to suboptimal credit assignment and unstable training in multi-modal reasoning tasks. To bridge this gap, we propose Segment-Aligned Policy Optimization (SAPO), a novel reinforcement learning paradigm that treats coherent reasoning steps, rather than tokens or full sequences, as the fundamental units of policy updates. SAPO introduces a step-wise Markov decision process abstraction over reasoning segments, accompanied by segment-level value estimation, advantage computation, and importance sampling mechanisms that are semantically aligned with reasoning boundaries. Experiments on representative reasoning benchmarks demonstrate that SAPO consistently outperforms token-level and sequence-level policy optimization methods, achieving significant accuracy improvements while exhibiting better training stability and value estimation consistency. Our work underscores the importance of aligning reinforcement learning updates with the intrinsic structure of reasoning, paving the way for more efficient and semantically grounded policy optimization in complex reasoning tasks. Code and models will be released to ensure full reproducibility.
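One plausible formalization of the segment-level quantities the abstract names, assuming standard PPO/GAE definitions applied at segment rather than token granularity (the paper's exact formulation may differ): with g_k the k-th reasoning segment emitted from state s_k, R_k its reward, and V_phi a segment-level value function,

```latex
% Hypothetical segment-level ratio and advantage, assuming PPO/GAE forms
% lifted from token to segment granularity; not the paper's exact definitions.
\rho_k(\theta)
  = \frac{\pi_\theta(g_k \mid s_k)}{\pi_{\theta_{\mathrm{old}}}(g_k \mid s_k)}
  = \exp\!\Big( \sum_{t \in g_k} \big[ \log \pi_\theta(y_t \mid y_{<t})
      - \log \pi_{\theta_{\mathrm{old}}}(y_t \mid y_{<t}) \big] \Big)

\delta_k = R_k + \gamma\, V_\phi(s_{k+1}) - V_\phi(s_k), \qquad
\hat{A}_k = \delta_k + \gamma\lambda\, \hat{A}_{k+1}
```

Under this reading, the ratio aggregates token log-probabilities only within one reasoning segment, so both the importance weight and the advantage signal respect the step boundaries rather than individual tokens or the full response.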