
Pre-training LLM without Learning Rate Decay Enhances Supervised Fine-Tuning

arXiv cs.CL / 3/18/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The study investigates the role of learning rate scheduling during pre-training of large language models, introducing Warmup-Stable-Only (WSO) which maintains a constant learning rate after warmup with no decay.
  • Experiments on 1B and 8B parameter models show that WSO yields better downstream performance after supervised fine-tuning (SFT) than decay-based schedulers, even when those schedulers perform better during pre-training.
  • The results hold across different training regimes, including mid-training and over-training, and are supported by loss landscape analysis showing that decay-based schedulers drive models into sharper minima while WSO preserves flatter minima.
  • The findings offer practical guidance for training and release strategies, suggesting pre-training with WSO enhances downstream adaptability of models.
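The scheduling contrast in the points above can be sketched in a few lines. The paper does not publish reference code, so the function names and hyperparameters here are illustrative assumptions: WSO warms up linearly and then holds the peak learning rate, while a common decay-based baseline (warmup followed by cosine decay) anneals toward a minimum rate.

```python
import math

def wso_lr(step, warmup_steps, peak_lr):
    """Warmup-Stable-Only: linear warmup, then a constant learning rate."""
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    return peak_lr  # no decay phase

def cosine_decay_lr(step, warmup_steps, total_steps, peak_lr, min_lr=0.0):
    """A typical decay-based baseline: linear warmup, then cosine decay."""
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))
```

The two schedules coincide during warmup and diverge afterward; under the paper's findings, the constant tail of `wso_lr` is what preserves downstream adaptability.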

Abstract

We investigate the role of learning rate scheduling in the large-scale pre-training of large language models, focusing on its influence on downstream performance after supervised fine-tuning (SFT). Decay-based learning rate schedulers are widely used to minimize pre-training loss. However, despite their widespread use, how these schedulers affect performance after SFT remains underexplored. In this paper, we examine Warmup-Stable-Only (WSO), which maintains a constant learning rate after warmup without any decay. Through experiments with 1B and 8B parameter models, we show that WSO consistently outperforms decay-based schedulers in terms of performance after SFT, even though decay-based schedulers may exhibit better performance after pre-training. The result also holds across different regimes with mid-training and over-training. Loss landscape analysis further reveals that decay-based schedulers lead models into sharper minima, whereas WSO preserves flatter minima that support adaptability. These findings indicate that applying LR decay to improve pre-training metrics may compromise downstream adaptability. Our work also provides practical guidance for training and model release strategies, highlighting that pre-training models with WSO enhances their adaptability for downstream tasks.
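The flat-versus-sharp minima distinction in the abstract can be illustrated with a toy probe. This is an assumption-laden sketch in the spirit of the paper's loss landscape analysis, not the authors' actual procedure: a flatter minimum shows a smaller average loss increase when the weights are randomly perturbed.

```python
import random

def quad_loss(w, curvature):
    """Toy 1-D loss with a minimum at w = 0; curvature controls sharpness."""
    return curvature * w * w

def sharpness(loss_fn, w, sigma=0.1, trials=1000, seed=0):
    """Mean loss increase under Gaussian weight perturbations:
    a simple proxy for minimum sharpness (lower = flatter)."""
    rng = random.Random(seed)
    base = loss_fn(w)
    return sum(loss_fn(w + rng.gauss(0.0, sigma)) - base
               for _ in range(trials)) / trials

# A flat minimum (low curvature) scores lower than a sharp one.
flat = sharpness(lambda w: quad_loss(w, 0.1), 0.0)
sharp = sharpness(lambda w: quad_loss(w, 10.0), 0.0)
```

In this toy setting the probe scales directly with curvature; the paper's claim is that models pre-trained with WSO sit at points that behave like the low-curvature case, which is what makes subsequent SFT more effective.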