Parameter-Efficient Token Embedding Editing for Clinical Class-Level Unlearning
arXiv cs.AI · March 23, 2026
📰 News · Models & Research
Key Points
- STEU is a parameter-efficient method for clinical language models that performs class-level unlearning by updating only PMI-selected token embeddings and a small classifier head, while keeping all encoder layers frozen.
- It achieves near-complete forgetting on MIMIC-IV (forget F1 = 0.0004) and maintains retained task performance (avg F1 = 0.4766) with only about 0.19% of parameters modified.
- The approach was evaluated across MIMIC-IV, MIMIC-III, and eICU using BioClinicalBERT, BERT-base, and DistilBERT, showing consistent forgetting with minimal utility loss.
- These results suggest that targeted behavioral unlearning can be achieved through sparse embedding edits alone, without modifying deeper encoder representations, offering a lightweight, privacy-preserving form of model maintenance.
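The PMI-based token selection described in the first key point can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy corpus, the tie-breaking rule, and the function name are all assumptions made here for demonstration.

```python
import math
from collections import Counter

def pmi_token_selection(docs, labels, forget_class, top_k=2):
    """Rank tokens by pointwise mutual information with the forget class.

    PMI(t, c) = log( P(t | c) / P(t) ), estimated from raw token counts.
    The top-ranked tokens are the candidates whose embedding rows would
    be edited; all other parameters would stay frozen. Ties are broken
    by in-class frequency (an illustrative choice, not from the paper).
    """
    all_counts, class_counts = Counter(), Counter()
    n_all = n_class = 0
    for doc, label in zip(docs, labels):
        tokens = doc.lower().split()
        all_counts.update(tokens)
        n_all += len(tokens)
        if label == forget_class:
            class_counts.update(tokens)
            n_class += len(tokens)
    scores = {}
    for tok, c in class_counts.items():
        p_t_given_c = c / n_class          # P(token | forget class)
        p_t = all_counts[tok] / n_all      # P(token) over the whole corpus
        scores[tok] = math.log(p_t_given_c / p_t)
    ranked = sorted(scores, key=lambda t: (scores[t], class_counts[t]),
                    reverse=True)
    return ranked[:top_k]

# Hypothetical clinical-style snippets (not MIMIC data).
docs = [
    "patient with sepsis and fever",
    "sepsis suspected lactate elevated",
    "fracture of left femur",
    "femur fracture repaired surgically",
]
labels = ["sepsis", "sepsis", "fracture", "fracture"]
print(pmi_token_selection(docs, labels, "sepsis", top_k=1))
```

In the full method, only the embedding rows of the selected tokens and the small classifier head would then receive gradient updates during the unlearning pass, which is what keeps the modified-parameter fraction around 0.19%.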
Related Articles
Data Augmentation Using GANs
Dev.to
Zero Shot Deformation Reconstruction for Soft Robots Using a Flexible Sensor Array and Cage Based 3D Gaussian Modeling
arXiv cs.RO
Speculative Policy Orchestration: A Latency-Resilient Framework for Cloud-Robotic Manipulation
arXiv cs.RO
ReMAP-DP: Reprojected Multi-view Aligned PointMaps for Diffusion Policy
arXiv cs.RO
AGILE: A Comprehensive Workflow for Humanoid Loco-Manipulation Learning
arXiv cs.RO