Differentially Private De-identification of Dutch Clinical Notes: A Comparative Evaluation

arXiv cs.CL / 4/24/2026


Key Points

  • The study addresses the privacy challenge of de-identifying Dutch clinical notes to enable compliant secondary use under regulations like GDPR and HIPAA.
  • It presents the first comparative evaluation of three de-identification approaches for Dutch clinical text: differential privacy (DP) methods, named entity recognition (NER)-based redaction, and LLM-based de-identification.
  • The researchers also test hybrid pipelines that use NER or LLM preprocessing before applying DP, aiming to improve the balance between privacy protection and downstream usefulness.
  • Results indicate that using DP mechanisms alone significantly reduces utility, while combining DP with linguistic preprocessing—particularly LLM-based redaction—substantially strengthens the privacy–utility trade-off.
  • The evaluation includes both privacy leakage checks and extrinsic tasks such as entity and relation classification to measure practical impact beyond redaction quality.

Abstract

Protecting patient privacy in clinical narratives is essential for enabling secondary use of healthcare data under regulations such as GDPR and HIPAA. While manual de-identification remains the gold standard, it is costly and slow, motivating automated methods that combine privacy guarantees with high utility. Most automated text de-identification pipelines employ named entity recognition (NER) to identify protected entities for redaction. Methods based on differential privacy (DP) provide formal privacy guarantees, and more recently large language models (LLMs) have increasingly been used for text de-identification in the clinical domain. In this work, we present the first comparative study of DP, NER, and LLMs for Dutch clinical text de-identification. We investigate these methods separately as well as hybrid strategies that apply NER or LLM preprocessing prior to DP, and assess performance in terms of privacy leakage and extrinsic evaluation (entity and relation classification). We show that DP mechanisms alone degrade utility substantially, but combining them with linguistic preprocessing, especially LLM-based redaction, significantly improves the privacy–utility trade-off.
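To make the hybrid strategy concrete, here is a minimal sketch of a redact-then-perturb pipeline. It is an illustration only, not the paper's implementation: the regex-based `redact` stands in for a real NER or LLM redaction step, and the DP step uses randomized response over a small closed vocabulary (keep each token with probability e^ε / (e^ε + |V| − 1), otherwise replace it with another vocabulary word uniformly at random), which is one simple word-level local-DP mechanism. All names, patterns, and the toy vocabulary are hypothetical; passing out-of-vocabulary tokens through unchanged, as done here for simplicity, would weaken the formal guarantee in practice.

```python
import math
import random
import re

def redact(text: str, patterns: dict[str, str]) -> str:
    """NER/LLM stand-in: replace protected entities with typed placeholders."""
    for label, pat in patterns.items():
        text = re.sub(pat, f"<{label}>", text)
    return text

def dp_randomize(tokens: list[str], vocab: list[str],
                 epsilon: float, rng: random.Random) -> list[str]:
    """Word-level randomized response: each in-vocabulary token is kept with
    probability e^eps / (e^eps + |V| - 1), else swapped for a uniformly random
    other vocabulary word. Placeholders and OOV tokens pass through (a
    simplification that weakens the formal guarantee)."""
    k = len(vocab)
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    out = []
    for tok in tokens:
        if tok.startswith("<") or tok not in vocab or rng.random() < p_keep:
            out.append(tok)
        else:
            out.append(rng.choice([w for w in vocab if w != tok]))
    return out

# Hypothetical usage on a toy note: redact first, then perturb what remains.
patterns = {"NAME": r"Jan Jansen", "DATE": r"\d{2}-\d{2}-\d{4}"}
vocab = ["fever", "cough", "headache", "nausea", "fatigue"]
note = "Jan Jansen reported fever and cough on 12-03-2024"
redacted = redact(note, patterns)
private = dp_randomize(redacted.split(), vocab, epsilon=1.0,
                       rng=random.Random(0))
```

With a large ε the mechanism keeps almost every token (high utility, weak privacy); as ε shrinks, replacements dominate and the text approaches uniform noise, which is exactly the trade-off the preprocessing step is meant to soften: entities are already removed deterministically, so the DP budget is spent only on the residual content.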