Selective Fine-Tuning of GPT Architectures for Parameter-Efficient Clinical Text Classification
arXiv cs.CL / 3/17/2026
📰 News · Models & Research
Key Points
- This study proposes a parameter-efficient selective fine-tuning framework for adapting GPT-2 to clinical text classification tasks by freezing most of the network and updating only the final Transformer block, the final layer normalization module, and a lightweight classification head.
- On 50,000 radiology reports from the MIMIC-IV-Note dataset, it achieves approximately 91% classification accuracy while updating fewer than 6% of the model parameters.
- The approach aims to reduce computational cost while preserving pretrained contextual representations, enabling scalable deployment in clinical NLP pipelines.
- Comparative experiments show selective fine-tuning provides a favorable balance between predictive performance and efficiency compared with head-only training and full-model fine-tuning.
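The selective fine-tuning setup described above can be sketched in PyTorch: freeze every parameter, then re-enable gradients only for the final Transformer block, the final layer normalization, and the classification head. The model below is a small hypothetical stand-in (the paper uses pretrained GPT-2), with attribute names (`h`, `ln_f`) borrowed from GPT-2 conventions for illustration; the exact layer sizes and the `selective_freeze` helper are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class TinyGPTClassifier(nn.Module):
    """Toy GPT-style classifier standing in for pretrained GPT-2."""
    def __init__(self, vocab=1000, d=64, n_layers=4, n_classes=2):
        super().__init__()
        self.wte = nn.Embedding(vocab, d)                      # token embeddings
        self.h = nn.ModuleList([                               # Transformer blocks
            nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
            for _ in range(n_layers)
        ])
        self.ln_f = nn.LayerNorm(d)                            # final layer norm
        self.head = nn.Linear(d, n_classes)                    # classification head

    def forward(self, ids):
        x = self.wte(ids)
        for block in self.h:
            x = block(x)
        x = self.ln_f(x)
        return self.head(x[:, -1])                             # classify from last token

def selective_freeze(model):
    """Freeze all parameters, then unfreeze only the last block,
    the final layer norm, and the classification head."""
    for p in model.parameters():
        p.requires_grad = False
    for module in (model.h[-1], model.ln_f, model.head):
        for p in module.parameters():
            p.requires_grad = True

model = TinyGPTClassifier()
selective_freeze(model)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable fraction: {trainable / total:.1%}")
```

With a full-size GPT-2 checkpoint the same freezing pattern yields the small trainable fraction the paper reports; in this toy model the exact percentage differs because the layer count and hidden size are much smaller.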
