Nexus: Same Pretraining Loss, Better Downstream Generalization via Common Minima

arXiv cs.LG / 4/13/2026


Key Points

  • The paper studies a geometric aspect of LLM pretraining, asking whether models converge to a common minimizer across data sources or simply a minimizer of total summed loss, and links this to downstream generalization.
  • It finds that common optimizers like AdamW frequently lead to task-specific minima that are far apart, which may harm out-of-distribution performance.
  • The authors propose the Nexus optimizer, which maximizes gradient similarity across data sources during training to encourage “closer” task-specific minima while reaching the same final pretraining loss.
  • Experiments across 130M–3B parameter models and multiple data mixtures/hyperparameter schedules show Nexus delivers significant downstream gains, including reported improvements on GSM8k and reduced out-of-distribution loss for the 3B model.
  • The work argues that pretraining loss alone is an insufficient proxy for evaluation, highlighting the role of implicit optimization biases in achieving better generalization.

Abstract

Pretraining is the cornerstone of Large Language Models (LLMs), dominating the vast majority of the computational budget and data and serving as the primary engine of their capabilities. During pretraining, LLMs acquire foundational knowledge from unprecedentedly massive and diverse data sources, encompassing a vast array of domains such as general language, mathematics, code, and complex reasoning. In this work, we investigate an interesting geometric question regarding the converged state of pretraining: does the model converge to a common minimizer across all data sources, or merely to a minimizer of the summed loss? We hypothesize that the geometric "closeness" of task-specific minima is intrinsically linked to downstream generalization. We reveal that standard optimizers (e.g., AdamW) often converge to points where task-specific minima are distant from each other. To address this, we propose the Nexus optimizer, which encourages the closeness of these minima by maximizing gradient similarity during optimization. Experiments across models ranging from 130M to 3B parameters, under various data mixtures and hyperparameter schedules, show that Nexus *significantly boosts downstream performance* despite *achieving the same pretraining loss*. Notably, on the 3B model, Nexus reduces the out-of-distribution loss by 0.012 and yields up to a 15.0% accuracy improvement on complex reasoning tasks (e.g., GSM8k). This finding challenges the reliance on pretraining loss as the sole proxy for model evaluation and demonstrates the importance of implicit biases in unlocking downstream generalization.
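The abstract states only that Nexus maximizes gradient similarity across data sources during optimization; the exact update rule is not given in this summary. A minimal pure-Python sketch of the general idea follows, where the function name `nexus_style_update` and the `lam` weighting are hypothetical: each data source contributes a gradient, and the combined step adds a term along the direction the unit-normalized task gradients agree on, so conflicting components cancel while shared components are reinforced.

```python
import math

def dot(u, v):
    """Inner product of two vectors given as lists of floats."""
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    """Euclidean norm."""
    return math.sqrt(dot(u, u))

def nexus_style_update(task_grads, lam=0.1):
    """Hypothetical sketch of a gradient-agreement update.

    task_grads: one gradient vector per data source (e.g., language,
    math, code), each a list of floats of equal length.
    lam: assumed weight on the agreement term (not from the paper).
    """
    n = len(task_grads)
    d = len(task_grads[0])
    # Standard direction: the mean gradient, i.e., the gradient of the
    # summed (averaged) pretraining loss.
    g_mean = [sum(g[i] for g in task_grads) / n for i in range(d)]
    # Agreement direction: mean of unit-normalized task gradients, so
    # components where sources conflict cancel out and only the shared
    # direction survives.
    units = [[x / (norm(g) + 1e-12) for x in g] for g in task_grads]
    agree = [sum(u[i] for u in units) / n for i in range(d)]
    # Nudge the summed-loss step toward the shared direction.
    return [m + lam * a for m, a in zip(g_mean, agree)]
```

When all sources produce the same gradient, the update simply rescales it; when two sources pull in exactly opposite directions, both the mean and the agreement term vanish and no step is taken along the conflicting axis. This is only an illustration of the gradient-similarity principle, not the authors' actual optimizer.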