Evaluating quality in synthetic data generation for large tabular health datasets

arXiv cs.LG / 4/20/2026


Key Points

  • The paper addresses the lack of consensus on concise quality metrics and benchmarks for synthetic data generation, especially for large tabular health datasets such as historical epidemiological records.
  • It evaluates seven recent synthetic data models from major machine learning families across four datasets spanning different scales, using systematic hyperparameter tuning to enable fair comparisons.
  • The authors propose a methodology for evaluating the fidelity of synthesized joint distributions, with metrics designed to align with a single-plot visualization.
  • A domain-specific assessment of the German Cancer Registries dataset shows that models struggle to strictly adhere to medical-domain constraints.
  • The work is intended as a foundational framework to help stakeholders select appropriate synthesizers and guide the release of synthetic health datasets.

Abstract

There is no consensus in the field of synthetic data on concise metrics for quality evaluations or benchmarks on large health datasets, such as historical epidemiological data. This study presents an evaluation of seven recent models from major machine learning families. The models were evaluated using four different datasets, each with a distinct scale. To ensure a fair comparison, we systematically tuned the hyperparameters of each model for each dataset. We propose a methodology for evaluating the fidelity of synthesized joint distributions, aligning metrics with visualization on a single plot. This method is applicable to any dataset and is complemented by a domain-specific analysis of the German Cancer Registries' epidemiological dataset. The analysis reveals the challenges models face in strictly adhering to medical-domain constraints. We hope this approach will serve as a foundational framework for guiding the selection of synthesizers and remain accessible to all stakeholders involved in releasing synthetic datasets.
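The paper's exact fidelity metrics are not reproduced here, but a minimal sketch can illustrate the general idea of measuring how well a synthesizer preserves joint structure in tabular data. One common, simple proxy (an assumption for illustration, not the authors' method) is to compare the pairwise correlation matrices of the real and synthetic tables; the function name `pairwise_corr_distance` below is hypothetical:

```python
import numpy as np

def pairwise_corr_distance(real, synth):
    """Mean absolute difference between the off-diagonal entries of the
    pairwise correlation matrices of real and synthetic data -- a simple
    proxy for how well bivariate joint structure is preserved.
    NOTE: illustrative only; not the metric proposed in the paper."""
    c_real = np.corrcoef(real, rowvar=False)
    c_synth = np.corrcoef(synth, rowvar=False)
    # Compare only off-diagonal entries (the diagonal is always 1).
    mask = ~np.eye(c_real.shape[0], dtype=bool)
    return float(np.abs(c_real - c_synth)[mask].mean())

# Toy demo: a perfect copy scores 0; shuffling one column independently
# destroys its correlation with the other and inflates the distance.
rng = np.random.default_rng(0)
real = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], size=1000)
synth = real.copy()
print(pairwise_corr_distance(real, synth))   # 0.0 for an exact copy
synth[:, 1] = rng.permutation(synth[:, 1])
print(pairwise_corr_distance(real, synth))   # larger: joint structure broken
```

In practice, scores like this for many column pairs (plus per-column marginal statistics) can be summarized on a single plot, which is the kind of aligned metric-and-visualization approach the paper advocates.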