Parameter-Efficient Fine-Tuning for Medical Text Summarization: A Comparative Study of LoRA, Prompt Tuning, and Full Fine-Tuning

arXiv cs.CL / 3/24/2026


Key Points

  • The paper compares two parameter-efficient fine-tuning (PEFT) methods, LoRA and prompt tuning, against full fine-tuning for medical text summarization, using Flan-T5 models on the PubMed dataset.
  • Experiments across multiple random seeds show LoRA is consistently stronger than full fine-tuning, reaching 43.52±0.18 ROUGE-1 on Flan-T5-Large while training only about 0.6% of parameters.
  • Full fine-tuning of the same model lags behind at 40.67±0.21 ROUGE-1, suggesting that updating all parameters is not necessary for this domain adaptation task.
  • Sensitivity analyses evaluate how LoRA rank and prompt token count affect performance, providing practical guidance for selecting PEFT hyperparameters.
  • The authors argue the low-rank constraint can act as beneficial regularization, which challenges the assumption that domain adaptation requires full parameter updates, and they release associated code.
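The core idea behind LoRA's parameter savings can be sketched in plain Python: the pretrained weight stays frozen, and only a rank-r pair of factors is trained, so the trainable fraction shrinks roughly with r over the layer width. This is a toy illustration of the mechanism, not the paper's implementation; the dimensions and rank below are illustrative, and the exact 0.6% figure depends on which layers of Flan-T5 are adapted.

```python
import random


def lora_param_fraction(d_in, d_out, r):
    """Fraction of parameters trained when a frozen (d_out x d_in) weight
    is adapted with rank-r factors A (r x d_in) and B (d_out x r)."""
    return r * (d_in + d_out) / (d_in * d_out)


def matvec(M, v):
    """Plain matrix-vector product over nested lists."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]


class LoRALinear:
    """Frozen weight W plus trainable low-rank update scaled by alpha / r."""

    def __init__(self, W, r=4, alpha=8, seed=0):
        rng = random.Random(seed)
        self.W = W                                  # frozen pretrained weight
        d_out, d_in = len(W), len(W[0])
        # A: random init; B: zero init, so the adapter starts as a no-op
        self.A = [[rng.gauss(0, 1.0 / r) for _ in range(d_in)] for _ in range(r)]
        self.B = [[0.0] * r for _ in range(d_out)]
        self.scale = alpha / r

    def forward(self, x):
        base = matvec(self.W, x)     # frozen path
        low = matvec(self.A, x)      # project down to the r-dim bottleneck
        delta = matvec(self.B, low)  # project back up to d_out
        return [b + self.scale * d for b, d in zip(base, delta)]
```

For a hypothetical 1024x1024 projection with rank 8, `lora_param_fraction(1024, 1024, 8)` is about 1.6% per adapted layer; averaged over a full model where most layers are untouched, the overall trainable fraction drops toward the sub-percent range the paper reports.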

Abstract

Fine-tuning large language models for domain-specific tasks such as medical text summarization demands substantial computational resources. Parameter-efficient fine-tuning (PEFT) methods offer promising alternatives by updating only a small fraction of parameters. This paper compares three adaptation approaches, Low-Rank Adaptation (LoRA), Prompt Tuning, and Full Fine-Tuning, across the Flan-T5 model family on the PubMed medical summarization dataset. Through experiments with multiple random seeds, we demonstrate that LoRA consistently outperforms full fine-tuning, achieving 43.52±0.18 ROUGE-1 on Flan-T5-Large with only 0.6% trainable parameters compared to 40.67±0.21 for full fine-tuning. Sensitivity analyses examine the impact of LoRA rank and prompt token count. Our findings suggest the low-rank constraint provides beneficial regularization, challenging assumptions about the necessity of full parameter updates. Code is available at https://github.com/eracoding/llm-medical-summarization.
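ROUGE-1, the metric quoted throughout, measures unigram overlap between a generated summary and a reference. A minimal F1 variant can be sketched as follows; this is a simplification (no stemming, tokenization, or multi-reference handling, unlike the official ROUGE toolkit the authors presumably used):

```python
from collections import Counter


def rouge1_f1(candidate, reference):
    """Unigram-overlap F1 between a candidate and a reference summary."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

A score of 43.52 on this 0-100 scale (i.e. 0.4352 F1) means roughly that over two-fifths of reference unigrams are recovered, with precision and recall balanced by the harmonic mean.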