To Adapt or Not to Adapt? Rethinking the Value of Medical Knowledge-Aware Large Language Models

arXiv cs.CL / 4/9/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The study evaluates whether medical-knowledge-aware (clinical) LLMs reliably outperform general-purpose LLMs on multiple-choice clinical QA in English and Spanish, using both standard and perturbation-based robustness benchmarks (a minimal sketch of one such perturbation follows this list).
  • Results indicate that clinical LLMs do not consistently outperform general-purpose models on English tasks; any gains are marginal and unstable, even under adversarial/perturbed evaluations.
  • In contrast, on the Spanish subsets the newly introduced Marmoka family of 8B-parameter clinical LLMs outperforms its Llama-based general-purpose counterpart, suggesting that adaptation can help in low-resource settings.
  • The authors also find that both general and clinical models commonly struggle with instruction following and strict output formatting, implying current short-form MCQA benchmarks may miss aspects of true medical competence.
  • They propose that robust medical LLMs can be developed for low-resource languages via continual domain-adaptive pretraining on medical corpora and instruction data.
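As a concrete illustration of the one-step question transformations mentioned above, the sketch below permutes the answer options of an MCQA item and remaps the gold label; a robust model should select the same underlying answer regardless of where it appears. This is a hypothetical Python example, not the authors' benchmark code: the question, the options, and the choice of option shuffling as the perturbation are all assumptions for illustration.

```python
import random

def shuffle_options(question: str, options: list[str], gold_idx: int,
                    seed: int = 0) -> tuple[str, list[str], int]:
    """One-step perturbation: permute the answer options and remap the
    gold index. Illustrative only; the paper's benchmark may use
    different transformations."""
    rng = random.Random(seed)
    order = list(range(len(options)))
    rng.shuffle(order)
    shuffled = [options[i] for i in order]   # shuffled[j] == options[order[j]]
    new_gold = order.index(gold_idx)         # new position of the gold option
    return question, shuffled, new_gold

# Hypothetical usage with a toy clinical MCQA item.
q = "Which of the following drugs is a beta-blocker?"
opts = ["Metoprolol", "Amlodipine", "Lisinopril", "Furosemide"]
_, perturbed, gold = shuffle_options(q, opts, gold_idx=0, seed=42)
print(perturbed, "gold:", chr(ord("A") + gold))
```

Comparing a model's accuracy on the original and perturbed items (or its consistency across several seeds) gives a simple robustness signal of the kind the benchmark is designed to probe.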

Abstract

BACKGROUND: Recent studies have shown that domain-adapted large language models (LLMs) do not consistently outperform general-purpose counterparts on standard medical benchmarks, raising questions about the need for specialized clinical adaptation.

METHODS: We systematically compare general and clinical LLMs on a diverse set of multiple-choice clinical question-answering tasks in English and Spanish. We introduce a perturbation-based evaluation benchmark that probes model robustness, instruction following, and sensitivity to adversarial variations. Our evaluation includes one-step and two-step question transformations, multi-prompt testing, and instruction-guided assessment. We analyze a range of state-of-the-art clinical models and their general-purpose counterparts, focusing on Llama 3.1-based models. Additionally, we introduce Marmoka, a family of lightweight 8B-parameter clinical LLMs for English and Spanish, developed via continual domain-adaptive pretraining on medical corpora and instructions.

RESULTS: The experiments show that clinical LLMs do not consistently outperform their general-purpose counterparts on English clinical tasks, even under the proposed perturbation-based benchmark. However, on the Spanish subsets the proposed Marmoka models obtain better results than Llama.

CONCLUSIONS: Our results show that, under current short-form MCQA benchmarks, clinical LLMs offer only marginal and unstable improvements over general-purpose models in English, suggesting that existing evaluation frameworks may be insufficient to capture genuine medical expertise. We further find that both general and clinical models exhibit substantial limitations in instruction following and strict output formatting. Finally, we demonstrate that robust medical LLMs can be successfully developed for low-resource languages such as Spanish, as evidenced by the Marmoka models.
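The formatting limitation reported in the conclusions is easy to make concrete. The sketch below is a minimal, hypothetical grader for strict output formatting: only responses that exactly match a requested pattern (here, "Answer: X") count as format-compliant, even when the correct letter appears somewhere in free text. The pattern and the strict/lenient split are assumptions for illustration, not the paper's actual evaluation code.

```python
import re

# Strict: the whole output must be exactly "Answer: <letter>".
STRICT = re.compile(r"^Answer:\s*([A-D])\s*$")
# Lenient: any standalone option letter anywhere in the output.
LENIENT = re.compile(r"\b([A-D])\b")

def grade(output: str, gold: str) -> dict:
    strict = STRICT.match(output.strip())
    lenient = LENIENT.search(output)
    return {
        "format_ok": strict is not None,
        "strict_correct": bool(strict and strict.group(1) == gold),
        "lenient_correct": bool(lenient and lenient.group(1) == gold),
    }

print(grade("Answer: B", "B"))                  # compliant and correct
print(grade("I think the answer is B.", "B"))   # correct content, bad format
```

A large gap between strict and lenient accuracy is one way a benchmark can surface instruction-following failures that plain MCQA accuracy hides.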
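Finally, for readers unfamiliar with the recipe behind models like Marmoka, here is a minimal sketch of continual domain-adaptive pretraining with Hugging Face transformers: a general-purpose base model is further trained on raw in-domain text with the standard causal-LM objective. The base model ID is the public Llama 3.1 checkpoint; the corpus file name and all hyperparameters are placeholders, not the authors' actual setup.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "meta-llama/Llama-3.1-8B"                 # general-purpose starting point
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token                    # Llama has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical raw medical corpus, one document per line.
corpus = load_dataset("text", data_files={"train": "medical_corpus.txt"})

def tokenize(batch):
    return tok(batch["text"], truncation=True, max_length=2048)

train = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tok, mlm=False)  # causal LM, no masking

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="marmoka-cpt",                # placeholder output path
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        learning_rate=2e-5,
        bf16=True,
    ),
    train_dataset=train,
    data_collator=collator,
)
trainer.train()
```

An instruction-tuning stage on medical instruction data would typically follow this step, matching the paper's description of pretraining on "medical corpora and instructions".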