Quantifying Membership Disclosure Risk for Tabular Synthetic Data Using Kernel Density Estimators

arXiv cs.LG / 3/12/2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • This paper proposes a KDE-based method to quantify membership disclosure risk in tabular synthetic data.
  • It models the distribution of nearest-neighbor distances between synthetic data and training records to enable probabilistic membership inference and ROC-based evaluation.
  • The paper introduces two attack models: a True Distribution Attack with privileged training data access and a Realistic Attack using only auxiliary data.
  • Empirical evaluation across four real-world datasets and six generators shows the KDE approach achieves higher F1 scores and sharper risk characterization than prior baselines, without relying on expensive shadow models.
  • A practical framework and metrics for post-generation risk assessment are provided, with datasets and code released for practitioners.
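The core idea summarized above can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the paper's exact implementation: it computes each record's distance to its nearest synthetic neighbour, fits one-dimensional KDEs over those distances for known members and for auxiliary non-members, and scores candidates with a log likelihood-ratio. All dataset names and the toy data here are assumptions for the sketch.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Toy stand-ins (assumed for illustration): training records, synthetic
# records generated near a subset of them, and auxiliary non-member records
# drawn from the same underlying population.
train = rng.normal(0.0, 1.0, size=(500, 5))
synthetic = train[:300] + rng.normal(0.0, 0.3, size=(300, 5))
auxiliary = rng.normal(0.0, 1.0, size=(500, 5))

def nn_distances(records, synth):
    """Distance from each record to its nearest synthetic neighbour."""
    nn = NearestNeighbors(n_neighbors=1).fit(synth)
    d, _ = nn.kneighbors(records)
    return d.ravel()

# Model the two distance distributions with kernel density estimators.
kde_member = gaussian_kde(nn_distances(train, synthetic))
kde_nonmember = gaussian_kde(nn_distances(auxiliary, synthetic))

def membership_score(records, synth):
    """Log likelihood-ratio: higher means 'more likely a training member'."""
    d = nn_distances(records, synth)
    eps = 1e-12  # guard against log(0) in low-density regions
    return np.log(kde_member(d) + eps) - np.log(kde_nonmember(d) + eps)
```

In this framing, the "True Distribution Attack" corresponds to fitting `kde_member` on genuine training records, while the "Realistic Attack" would have to approximate both densities from auxiliary data alone.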

Abstract

The use of synthetic data has become increasingly popular as a privacy-preserving alternative to sharing real datasets, especially in sensitive domains such as healthcare, finance, and demography. However, the privacy assurances of synthetic data are not absolute: synthetic datasets remain susceptible to membership inference attacks (MIAs), in which adversaries aim to determine whether a specific individual was present in the dataset used to train the generator. In this work, we propose a practical and effective method to quantify membership disclosure risk in tabular synthetic datasets using kernel density estimators (KDEs). Our KDE-based approach models the distribution of nearest-neighbour distances between synthetic data and the training records, allowing probabilistic inference of membership and enabling robust evaluation via ROC curves. We introduce two attack models: a 'True Distribution Attack', which assumes privileged access to training data, and a more realistic, implementable 'Realistic Attack' that uses auxiliary data without true membership labels. Empirical evaluations across four real-world datasets and six synthetic data generators demonstrate that our method consistently achieves higher F1 scores and sharper risk characterization than a prior baseline approach, without requiring computationally expensive shadow models. The proposed method provides a practical framework and metric for quantifying membership disclosure risk in synthetic data, enabling data custodians to conduct a post-generation risk assessment before releasing their synthetic datasets for downstream use. The datasets and code for this study are available at https://github.com/PyCoder913/MIA-KDE.
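The abstract's evaluation protocol (ROC curves plus F1 scores for an attack's membership scores) can be sketched as follows. This is a minimal, hypothetical example: the attack scores are simulated Gaussians, and the threshold rule (maximising Youden's J before reporting F1) is one reasonable choice, not necessarily the paper's.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score, f1_score

rng = np.random.default_rng(1)

# Hypothetical attack scores: members tend to score higher than non-members.
member_scores = rng.normal(1.0, 1.0, 500)
nonmember_scores = rng.normal(0.0, 1.0, 500)

y_true = np.concatenate([np.ones(500), np.zeros(500)])
y_score = np.concatenate([member_scores, nonmember_scores])

# The ROC curve and its AUC characterise the attack across all thresholds.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)

# Pick the threshold maximising Youden's J (tpr - fpr), then report F1
# for the resulting hard membership predictions.
best = np.argmax(tpr - fpr)
y_pred = (y_score >= thresholds[best]).astype(int)
f1 = f1_score(y_true, y_pred)
```

Reporting both the full ROC curve and a thresholded F1 score gives data custodians a threshold-free view of risk alongside a single operating-point summary.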