A Dual Perspective on Synthetic Trajectory Generators: Utility Framework and Privacy Vulnerabilities

arXiv cs.AI / 4/22/2026


Key Points

  • The paper addresses the privacy–utility trade-off in human mobility data, noting that mobility traces can reveal sensitive attributes even after traditional anonymization methods.
  • It proposes a new framework to evaluate the utility of synthetic trajectory generators, aiming to better measure how well generated trajectories support intended applications.
  • The authors argue that privacy evaluation remains difficult and should be handled via adversarial testing aligned with current EU regulatory expectations.
  • They introduce a new membership inference attack targeting a subset of generative models previously considered privacy-preserving due to resistance to trajectory user-linking.
  • Overall, the work contributes both a utility-evaluation framework and empirical evidence that the privacy of synthetic mobility data can still be compromised.

Abstract

Human mobility data are used in numerous applications, ranging from public health to urban planning. Human mobility is inherently sensitive, as it can reveal information such as religious beliefs and political affiliations. Historically, it has been proposed to modify the data using techniques such as aggregation, obfuscation, or noise addition in order to adequately protect privacy and eliminate concerns. As these methods come at a great cost in utility, new methods leveraging developments in generative models were introduced. The extent to which such methods resolve the privacy–utility trade-off remains an open problem. In this paper, we take a first step towards solving it by introducing and applying a new framework for utility evaluation. Furthermore, we provide evidence that privacy evaluation remains a significant challenge and that it should be tackled through adversarial evaluation, in accordance with current EU regulation. We propose a new membership inference attack against a subcategory of generative models, even though this subcategory was previously deemed private due to its resistance to trajectory–user linking.
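The paper's attack itself is not detailed in this summary, but the general idea behind a distance-based membership inference attack against a generative model can be sketched as follows. This is a minimal, hypothetical illustration (all function names and the threshold are assumptions, not the authors' method): if a candidate trajectory lies unusually close to the model's generated samples, the attacker guesses it was in the training set.

```python
import math
import random

def traj_distance(a, b):
    # Mean pointwise Euclidean distance between two equal-length trajectories,
    # where each trajectory is a list of (x, y) coordinates.
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def mia_score(candidate, synthetic_set):
    # Distance from the candidate to its nearest synthetic trajectory.
    # A small score suggests the generator may have memorized the candidate.
    return min(traj_distance(candidate, s) for s in synthetic_set)

def infer_membership(candidate, synthetic_set, threshold):
    # Predict "member" if the candidate is closer to the synthetic data
    # than the (attacker-chosen) threshold.
    return mia_score(candidate, synthetic_set) < threshold

# Toy demonstration: a generator that memorizes its training trajectory.
random.seed(0)
member = [(float(i), float(i)) for i in range(10)]          # in training data
non_member = [(i + 5.0, i - 5.0) for i in range(10)]        # never seen
synthetic = [
    [(x + random.gauss(0, 0.05), y + random.gauss(0, 0.05)) for x, y in member]
    for _ in range(20)
]

print(infer_membership(member, synthetic, threshold=1.0))      # likely True
print(infer_membership(non_member, synthetic, threshold=1.0))  # likely False
```

In practice, attacks of this family calibrate the threshold on reference data and operate on learned features rather than raw coordinates; the sketch only conveys the core intuition that memorization by the generator leaks membership signal.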