AI Navigate

From Weak Cues to Real Identities: Evaluating Inference-Driven De-Anonymization in LLM Agents

arXiv cs.AI / 3/20/2026


Key Points

  • The paper introduces 'inference-driven linkage' as a privacy risk where LLM-based agents reconstruct real-world identities from sparse cues and public information without bespoke engineering.
  • It evaluates this threat across three settings—classical linkage (Netflix and AOL), InferLink benchmarks, and modern text-rich artifacts—and finds agents can perform both fixed-pool matching and open-ended identity resolution without task-specific heuristics.
  • In the Netflix Prize setting, an agent reconstructs 79.2% of identities, significantly higher than a 56.0% classical baseline.
  • Linkage can arise even without adversarial prompts, emerging as a byproduct of benign cross-source analysis and unstructured research narratives.
  • The study argues that identity inference must be treated as a first-class privacy risk, and that evaluations should measure what identities an agent can infer, not only what information it explicitly discloses.

Abstract

Anonymization is widely treated as a practical safeguard because re-identifying anonymous records was historically costly, requiring domain expertise, tailored algorithms, and manual corroboration. We study a growing privacy risk that may weaken this barrier: LLM-based agents can autonomously reconstruct real-world identities from scattered, individually non-identifying cues. By combining these sparse cues with public information, agents resolve identities without bespoke engineering. We formalize this threat as inference-driven linkage and systematically evaluate it across three settings: classical linkage scenarios (Netflix and AOL), InferLink (a controlled benchmark varying task intent, shared cues, and attacker knowledge), and modern text-rich artifacts. Without task-specific heuristics, agents successfully execute both fixed-pool matching and open-ended identity resolution. In the Netflix Prize setting, an agent reconstructs 79.2% of identities, significantly outperforming a 56.0% classical baseline. Furthermore, linkage emerges not only under explicit adversarial prompts but also as a byproduct of benign cross-source analysis in InferLink and unstructured research narratives. These findings establish that identity inference, not merely explicit information disclosure, must be treated as a first-class privacy risk; evaluations must measure what identities an agent can infer.
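To make the fixed-pool matching setting concrete, here is a minimal sketch of classical linkage by cue overlap. This is not the paper's agent or its scoring function; the data model (string-labeled cues such as rated titles and coarse dates) and the `min_overlap` threshold are illustrative assumptions, showing only how individually non-identifying cues can jointly single out one profile in a pool.

```python
from typing import Dict, Optional, Set

def link_record(anon_cues: Set[str],
                public_pool: Dict[str, Set[str]],
                min_overlap: int = 2) -> Optional[str]:
    """Toy fixed-pool linkage: return the identity in the public pool
    sharing the most cues with the anonymized record, if the overlap
    clears a (hypothetical) confidence threshold."""
    best_id, best_score = None, 0
    for identity, cues in public_pool.items():
        score = len(anon_cues & cues)  # number of shared cues
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= min_overlap else None

# Each cue alone is weak; together they isolate a single profile.
anon = {"movie:Memento", "movie:Spirited Away", "week:2004-W31"}
pool = {
    "alice": {"movie:Memento", "movie:Spirited Away", "week:2004-W31"},
    "bob":   {"movie:Memento", "week:2003-W02"},
}
print(link_record(anon, pool))  # → alice
```

The paper's point is that an LLM agent no longer needs even this bespoke scoring to be engineered: it can improvise equivalent linkage, and extend it to open-ended identity resolution against public sources.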