AI Navigate

The Finetuner's Fallacy: When to Pretrain with Your Finetuning Data

arXiv cs.LG · March 18, 2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper proposes specialized pretraining (SPT), a strategy that mixes a small domain dataset (normally reserved for finetuning) into pretraining as a fraction of total tokens, repeating it to boost domain-specific performance while preserving general capabilities.
  • In experiments across ChemPile, MusicPile, and ProofPile, SPT improves domain performance after finetuning and reduces the pretraining tokens needed to reach a given domain performance by up to 1.75x compared with standard pretraining.
  • SPT provides greater benefits when the target domain is underrepresented in the pretraining corpus: on domains far from web text, a 1B SPT model can outperform a 3B standard pretrained model.
  • The authors derive overfitting scaling laws to guide how much domain data to repeat given a pretraining budget and recommend incorporating domain data early in training to maximize gains.
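The core mechanism behind SPT is simple data mixing: a small domain set is cycled so it appears at a fixed fraction of the token stream, starting from the beginning of pretraining. The paper does not publish its sampler, so the sketch below is an illustrative, deterministic interleaving (the function name and signature are assumptions, not the authors' code):

```python
from itertools import cycle

def spt_schedule(general_docs, domain_docs, domain_frac, total_steps):
    """Illustrative SPT-style mixing: emit a (cycled, i.e. repeated)
    domain example whenever the running domain fraction falls below
    `domain_frac`, otherwise emit the next general-corpus example.
    Domain data appears from step 1, i.e. as early as possible."""
    gen, dom = cycle(general_docs), cycle(domain_docs)
    schedule, n_dom = [], 0
    for step in range(1, total_steps + 1):
        if n_dom < domain_frac * step:
            schedule.append(next(dom))  # repeated exposure to the small domain set
            n_dom += 1
        else:
            schedule.append(next(gen))
    return schedule
```

With `domain_frac = 0.25` over 8 steps, two of the eight examples come from the domain set, spread evenly across training rather than clustered at the end. The paper's overfitting scaling laws would then govern the choice of `domain_frac` for a given compute budget.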

Abstract

Real-world model deployments demand strong performance on narrow domains where data is often scarce. Typically, practitioners finetune models to specialize them, but this risks overfitting to the domain and forgetting general knowledge. We study a simple strategy, specialized pretraining (SPT), where a small domain dataset, typically reserved for finetuning, is repeated starting from pretraining as a fraction of the total tokens. Across three specialized domains (ChemPile, MusicPile, and ProofPile), SPT improves domain performance and preserves general capabilities after finetuning compared to standard pretraining. In our experiments, SPT reduces the pretraining tokens needed to reach a given domain performance by up to 1.75x. These gains grow when the target domain is underrepresented in the pretraining corpus: on domains far from web text, a 1B SPT model outperforms a 3B standard pretrained model. Beyond these empirical gains, we derive overfitting scaling laws to guide practitioners in selecting the optimal domain-data repetition for a given pretraining compute budget. Our observations reveal the finetuner's fallacy: while finetuning may appear to be the cheapest path to domain adaptation, introducing specialized domain data during pretraining stretches its utility. SPT yields better specialized domain performance (via reduced overfitting across repeated exposures) and better general domain performance (via reduced forgetting during finetuning), ultimately achieving stronger results with fewer parameters and less total compute when amortized over inference. To get the most out of domain data, incorporate it as early in training as possible.