Can Continual Pre-training Bridge the Performance Gap between General-purpose and Specialized Language Models in the Medical Domain?

arXiv cs.CL / 4/22/2026


Key Points

  • The paper proposes that continual pre-training and model merging can close the performance gap between smaller specialized LLMs and larger general-purpose models in the German medical domain.
  • It addresses limited specialized non-English data by building a high-quality German medical corpus (FineMed-de) derived from FineWeb2.
  • Using FineMed-de, the authors continually pre-train and merge three existing LLMs (7B–24B parameters) to form the DeFineMed model family, improving small-model performance on German medical benchmarks (see the merging sketch after this list).
  • A pairwise win-rate analysis against a much larger instruction model (Mistral-Small-24B-Instruct) shows roughly a 3.5× increase in win rate after domain adaptation, suggesting that specialized 7B models can be a resource-efficient option for complex medical instruction-following.
  • The study finds that while merging can recover instruction-following, it introduces trade-offs such as language mixing and greater verbosity, indicating the need for more targeted fine-tuning going forward.
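The summary names "model merging" but does not spell out the recipe, so here is a minimal sketch of one common approach: linear interpolation of parameter tensors between a continually pre-trained checkpoint and an instruction-tuned donor of the same architecture. The checkpoint names and the interpolation weight `alpha` are illustrative assumptions, not the paper's actual models or settings.

```python
# Minimal sketch of weight-space model merging via linear interpolation.
# Assumptions: both checkpoints share the same architecture and tensor shapes;
# the checkpoint names below are placeholders, not the paper's actual models.
import torch
from transformers import AutoModelForCausalLM

domain_ckpt = "your-org/qwen2.5-7b-finemed-de"   # continually pre-trained (hypothetical)
instruct_ckpt = "Qwen/Qwen2.5-7B-Instruct"       # instruction-tuned donor
alpha = 0.5                                      # interpolation weight (tunable)

domain_model = AutoModelForCausalLM.from_pretrained(domain_ckpt, torch_dtype=torch.bfloat16)
instruct_model = AutoModelForCausalLM.from_pretrained(instruct_ckpt, torch_dtype=torch.bfloat16)

instruct_state = instruct_model.state_dict()
merged_state = {}
for name, domain_param in domain_model.state_dict().items():
    # Interpolate every parameter tensor between the two checkpoints.
    merged_state[name] = alpha * domain_param + (1.0 - alpha) * instruct_state[name]

domain_model.load_state_dict(merged_state)
domain_model.save_pretrained("definemed-7b-merged-sketch")
```

In practice, merging toolkits implement more elaborate schemes (e.g., SLERP or TIES) on top of this idea; which variant the authors used is not stated in this summary.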

Abstract

This paper narrows the performance gap between small, specialized models and significantly larger general-purpose models through domain adaptation via continual pre-training and merging. We address the scarcity of specialized non-English data by constructing a high-quality German medical corpus (FineMed-de) from FineWeb2. This corpus is used to continually pre-train and merge three well-known LLMs (ranging from 7B to 24B parameters), creating the DeFineMed model family. A comprehensive evaluation confirms that specialization dramatically enhances 7B model performance on German medical benchmarks. Furthermore, the pairwise win-rate analysis of the Qwen2.5-based models demonstrates an approximately 3.5-fold increase in the win-rate against the much larger Mistral-Small-24B-Instruct through domain adaptation. This evidence positions specialized 7B models as a competitive, resource-efficient solution for complex medical instruction-following tasks. While model merging successfully restores instruction-following abilities, a subsequent failure mode analysis reveals inherent trade-offs, including the introduction of language mixing and increased verbosity, highlighting the need for more targeted fine-tuning in future work. This research provides a robust, compliant methodology for developing specialized LLMs, serving as the foundation for practical use in German-speaking healthcare contexts.
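To make the reported "3.5-fold increase in win rate" concrete, the sketch below shows how a pairwise win rate is typically computed from judge verdicts, with ties counted as half a win. The verdict counts are illustrative assumptions chosen only to reproduce a 3.5× ratio; they are not the paper's data, and the paper's judging protocol may differ.

```python
# Minimal sketch of a pairwise win-rate computation.
# Each verdict compares a candidate model's answer against the reference model
# (here, Mistral-Small-24B-Instruct) on the same prompt; values are
# "win", "tie", or "loss" as assigned by a judge (human or LLM).
# The verdict lists below are illustrative, not the paper's results.
from collections import Counter

def win_rate(verdicts, count_ties_as_half=True):
    counts = Counter(verdicts)
    wins = counts["win"] + (0.5 * counts["tie"] if count_ties_as_half else 0.0)
    return wins / len(verdicts)

base_verdicts = ["loss"] * 80 + ["win"] * 10 + ["tie"] * 10      # before domain adaptation
adapted_verdicts = ["loss"] * 40 + ["win"] * 45 + ["tie"] * 15   # after domain adaptation

base = win_rate(base_verdicts)        # 0.150
adapted = win_rate(adapted_verdicts)  # 0.525
print(f"win rate before: {base:.3f}, after: {adapted:.3f}, ratio: {adapted / base:.1f}x")
```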