Selective Fine-Tuning of GPT Architectures for Parameter-Efficient Clinical Text Classification
arXiv cs.CL / 3/17/2026
📰 News · Models & Research
Key Points
- This study proposes a parameter-efficient selective fine-tuning framework for adapting GPT-2 to clinical text classification: most of the network is frozen, and only the final Transformer block, the final layer normalization module, and a lightweight classification head are updated (a minimal sketch appears after this list).
- On 50,000 radiology reports from the MIMIC-IV-Note dataset, it achieves approximately 91% classification accuracy while updating fewer than 6% of the model's parameters.
- The approach aims to reduce computational requirements while preserving pretrained contextual representations, enabling scalable deployment in clinical NLP settings.
- Comparative experiments show selective fine-tuning provides a favorable balance between predictive performance and efficiency compared with head-only training and full-model fine-tuning.
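Below is a minimal sketch of the selective fine-tuning setup described in the key points, using Hugging Face transformers with the small GPT-2 checkpoint. The mean-pooled classification head, the label count, and the pooling strategy are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: freeze GPT-2, then unfreeze only the final Transformer
# block, the final layer norm, and a new classification head.
import torch
import torch.nn as nn
from transformers import GPT2Model

class SelectiveGPT2Classifier(nn.Module):
    def __init__(self, num_labels: int = 2):  # num_labels is an assumption
        super().__init__()
        self.backbone = GPT2Model.from_pretrained("gpt2")
        # Lightweight classification head on top of the hidden states.
        self.head = nn.Linear(self.backbone.config.n_embd, num_labels)

        # Freeze every backbone parameter first ...
        for p in self.backbone.parameters():
            p.requires_grad = False
        # ... then unfreeze only the final Transformer block ...
        for p in self.backbone.h[-1].parameters():
            p.requires_grad = True
        # ... and the final layer normalization module.
        for p in self.backbone.ln_f.parameters():
            p.requires_grad = True

    def forward(self, input_ids, attention_mask):
        hidden = self.backbone(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # Mean-pool over non-padding tokens (an illustrative choice).
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
        return self.head(pooled)

model = SelectiveGPT2Classifier(num_labels=2)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable / total:.1%} of {total:,} parameters")
```

An optimizer would then receive only the trainable subset, e.g. `torch.optim.AdamW(filter(lambda p: p.requires_grad, model.parameters()), lr=2e-5)`. For GPT-2 small this leaves roughly the final block's ~7M parameters trainable out of ~124M, consistent with the under-6% figure quoted above.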