Is Biomedical Specialization Still Worth It? Insights from Domain-Adaptive Language Modelling with a New French Health Corpus

arXiv cs.CL / 4/9/2026


Key Points

  • The paper evaluates domain-adaptive pre-training (DAPT) as a way to specialize small-to-mid-sized LLMs for the French biomedical domain, aiming to improve domain performance in a non-English setting.
  • It examines whether DAPT helps without causing unacceptable general-capability degradation, addressing the trade-off between domain gains and broader generalization.
  • The authors release a fully open-licensed French biomedical corpus for commercial and open-source use, alongside trained specialized French biomedical LLMs.
  • Results cast doubt on DAPT’s overall efficacy relative to prior findings, but suggest DAPT can still be viable in smaller-scale, resource-constrained settings when applied under the right conditions.
  • The study finds that model merging after DAPT may be essential to mitigate generalization trade-offs and can sometimes even improve performance on the targeted specialized tasks (see the sketch after this list).
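
Model merging in this context typically means combining the weights of the DAPT checkpoint with those of its own base model. The sketch below shows the simplest variant, linear interpolation of parameters, assuming both checkpoints share an architecture and load with Hugging Face Transformers; the checkpoint names and the mixing coefficient are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: merge a base model and its DAPT'd variant by linear
# weight interpolation. Names and ALPHA are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM

BASE = "org/base-llm"            # hypothetical general-purpose checkpoint
DAPT = "org/base-llm-fr-biomed"  # hypothetical checkpoint after biomedical DAPT
ALPHA = 0.5                      # weight given to the DAPT checkpoint

base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.float32)
dapt = AutoModelForCausalLM.from_pretrained(DAPT, torch_dtype=torch.float32)

base_sd, dapt_sd = base.state_dict(), dapt.state_dict()
# Element-wise interpolation of every parameter tensor:
# merged = (1 - alpha) * base + alpha * dapt
merged_sd = {k: (1 - ALPHA) * base_sd[k] + ALPHA * dapt_sd[k] for k in base_sd}

base.load_state_dict(merged_sd)
base.save_pretrained("base-llm-fr-biomed-merged")
```

Varying the interpolation coefficient trades domain specialization against retained general capability, which is the trade-off the paper reports merging helps mitigate.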

Abstract

Large language models (LLMs) have demonstrated remarkable capabilities across diverse domains, yet their adaptation to specialized fields remains challenging, particularly for non-English languages. This study investigates domain-adaptive pre-training (DAPT) as a strategy for specializing small to mid-sized LLMs in the French biomedical domain through continued pre-training. We address two key research questions: the viability of specialized continued pre-training for domain adaptation and the relationship between domain-specific performance gains and general capability degradation. Our contributions include the release of a fully open-licensed French biomedical corpus suitable for commercial and open-source applications, the training and release of specialized French biomedical LLMs, and novel insights for DAPT implementation. Our methodology encompasses the collection and refinement of high-quality French biomedical texts, the exploration of causal language modeling approaches using DAPT, and extensive comparative evaluations. Our results cast doubt on the efficacy of DAPT, in contrast to previous work, but we highlight its viability in smaller-scale, resource-constrained scenarios under the right conditions. Findings in this paper further suggest that model merging post-DAPT is essential to mitigate generalization trade-offs and in some cases even improves performance on the specialized tasks at which DAPT was directed.
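
Concretely, DAPT here amounts to continuing next-token (causal LM) training of an existing checkpoint on the in-domain corpus. Below is a minimal sketch using Hugging Face Transformers; the model name, corpus file, and hyperparameters are illustrative assumptions rather than the paper's actual configuration.

```python
# Minimal sketch of DAPT as continued causal-LM pre-training on a
# French biomedical text file. All names/values are hypothetical.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL = "org/base-llm"           # hypothetical base checkpoint
CORPUS = "fr_biomed_corpus.txt"  # hypothetical in-domain corpus, one doc per line

tokenizer = AutoTokenizer.from_pretrained(MODEL)
if tokenizer.pad_token is None:          # many causal-LM tokenizers lack a pad token
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL)

raw = load_dataset("text", data_files={"train": CORPUS})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

# mlm=False selects the standard next-token (causal LM) objective.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="dapt-fr-biomed",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,   # usually set lower than from-scratch pre-training
    num_train_epochs=1,
    logging_steps=100,
    save_strategy="epoch",
)

Trainer(model=model, args=args, train_dataset=tokenized,
        data_collator=collator).train()
```

A lower learning rate than from-scratch pre-training is the usual choice for continued pre-training, limiting drift from the base model's general capabilities; the merged checkpoint from the earlier sketch can then be evaluated against both the base and DAPT models.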