Detecting Data Contamination in Large Language Models

arXiv cs.AI / 4/22/2026


Key Points

  • The paper examines how Membership Inference Attacks (MIA) could be used to detect whether specific copyrighted or sensitive documents were included in the training data of large language models (LLMs).
  • It compares leading state-of-the-art black-box MIA methods on a unified set of datasets to evaluate whether any approach can reliably perform membership detection.
  • A new technique called “Familiarity Ranking” is introduced as one possible black-box MIA design; it gives the model more freedom in its responses so that its reasoning can be examined.
  • The study finds that none of the evaluated methods can reliably detect membership in modern LLMs, with AUC-ROC around 0.5 across multiple models, indicating near-random performance.
  • The higher true-positive and false-positive rates observed for more advanced LLMs point to stronger reasoning and generalization capabilities, which makes black-box membership detection increasingly difficult.

Abstract

Large Language Models (LLMs) are trained on large amounts of data, some of which may come from copyrighted sources. Membership Inference Attacks (MIA) aim to detect whether specific documents were included in the training corpora of LLMs. Black-box MIAs require a significant amount of data manipulation, which makes comparing them challenging. We study state-of-the-art (SOTA) MIAs under black-box assumptions and compare them on a unified set of datasets to determine whether any of them can reliably detect membership in SOTA LLMs. In addition, a new method, called Familiarity Ranking, was developed to showcase a possible approach to black-box MIAs, one that gives LLMs more freedom in their expression so their reasoning can be better understood. The results indicate that none of the methods are capable of reliably detecting membership in LLMs, as shown by an AUC-ROC of approximately 0.5 for all methods across several LLMs. The higher TPR and FPR of more advanced LLMs indicate stronger reasoning and generalization capabilities, showcasing the difficulty of detecting membership in LLMs using black-box MIAs.
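
To make the headline result concrete, the sketch below shows how an AUC-ROC of ~0.5 corresponds to a useless membership detector. It does not reproduce any method from the paper: the scalar "membership scores" are hypothetical stand-ins for whatever signal a black-box MIA extracts (e.g. a likelihood or familiarity proxy), drawn here from identical random distributions to model a detector with no signal.

```python
import random

def auc_roc(member_scores, nonmember_scores):
    """AUC-ROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen member document scores above a random non-member
    (ties count as 0.5). 1.0 = perfect detector, 0.5 = coin flip."""
    wins = 0.0
    for m in member_scores:
        for n in nonmember_scores:
            if m > n:
                wins += 1.0
            elif m == n:
                wins += 0.5
    return wins / (len(member_scores) * len(nonmember_scores))

# Hypothetical attack output: if the LLM leaks no membership signal,
# scores for training-set members and non-members are statistically
# indistinguishable, so the AUC-ROC lands near 0.5 -- the outcome the
# paper reports for all evaluated black-box MIAs.
random.seed(0)
member_scores = [random.random() for _ in range(1000)]
nonmember_scores = [random.random() for _ in range(1000)]
print(f"AUC-ROC: {auc_roc(member_scores, nonmember_scores):.3f}")  # near 0.5
```

A reliable attack would instead push member scores systematically above non-member scores, e.g. `auc_roc([0.9, 0.8], [0.2, 0.1])` yields 1.0.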