Balanced Thinking: Improving Chain of Thought Training in Vision Language Models

arXiv cs.AI / 3/20/2026

Key Points

  • SCALe (Scheduled Curriculum Adaptive Loss) separates supervision over reasoning and answer segments with a length-independent, dynamic weighting to address token-imbalance in standard SFT.
  • SCALe-SFT uses a cosine scheduling policy to gradually shift training focus from the <think> segment to the <answer> segment, promoting concise and well-grounded reasoning.
  • Empirical results show SCALe improves accuracy over vanilla SFT and matches the performance of the full two-phase SFT + GRPO pipeline while requiring only about one-seventh of the training time.
  • When combined with GRPO, SCALe delivers the best overall performance, highlighting its value as a standalone method and as a foundation for reinforcement refinement.

Abstract

Multimodal reasoning in vision-language models (VLMs) typically relies on a two-stage process: supervised fine-tuning (SFT) and reinforcement learning (RL). In standard SFT, all tokens contribute equally to the loss, even though reasoning data are inherently token-imbalanced. Long traces overshadow short but task-critical segments, leading to verbose reasoning and inaccurate answers. We propose SCALe (Scheduled Curriculum Adaptive Loss), which explicitly separates supervision over reasoning and answer segments using dynamic, length-independent weighting. Unlike vanilla SFT, which overweights the <think> segment, SCALe-SFT gradually shifts the focus from <think> to <answer> throughout training via a cosine scheduling policy, encouraging concise and well-grounded reasoning. We evaluate SCALe across diverse benchmarks and architectures. Results show that SCALe consistently improves accuracy over vanilla SFT and matches the performance of the full two-phase SFT + GRPO pipeline while requiring only about one-seventh of the training time, making it a lightweight yet effective alternative. When combined with GRPO, SCALe achieves the best overall performance, highlighting its value both as a standalone method and as a strong foundation for reinforcement refinement.
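
The core idea described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes per-token cross-entropy losses have already been computed, averages each segment over its own length (so a long <think> trace cannot dominate by token count), and combines the two segment losses with a cosine-scheduled weight that decays from the reasoning segment toward the answer segment. The function names and the exact schedule form are hypothetical.

```python
import math

def cosine_weight(step: int, total_steps: int) -> float:
    # Hypothetical cosine schedule: the weight on the <think> segment
    # decays from 1.0 at the start of training to 0.0 at the end,
    # shifting supervision toward the <answer> segment.
    return 0.5 * (1.0 + math.cos(math.pi * step / total_steps))

def scale_loss(token_losses, think_mask, answer_mask, step, total_steps):
    """Illustrative SCALe-style loss for one sequence.

    token_losses: per-token cross-entropy values.
    think_mask / answer_mask: 0/1 flags marking the <think> and
    <answer> segments. Each segment is averaged over its own length
    (length-independent), then the two means are mixed with the
    scheduled weight.
    """
    def segment_mean(mask):
        picked = [l for l, m in zip(token_losses, mask) if m]
        return sum(picked) / max(len(picked), 1)

    alpha = cosine_weight(step, total_steps)
    return alpha * segment_mean(think_mask) + (1.0 - alpha) * segment_mean(answer_mask)
```

For example, with a three-token <think> segment averaging 2.0 and a two-token <answer> segment averaging 1.5, the loss moves smoothly from 2.0 at step 0 to 1.5 at the final step, regardless of how many tokens each segment contains.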