Beyond Indistinguishability: Measuring Extraction Risk in LLM APIs

arXiv cs.LG · April 22, 2026


Key Points

  • The paper argues that commonly used indistinguishability metrics (e.g., differential-privacy bounds or low measured membership inference) do not reliably capture a model’s actual risk of data extraction via LLM APIs.
  • It formalizes a separation between “extraction” and indistinguishability-based privacy, showing that inextractability and indistinguishability are incomparable (bounding distinguishability does not bound extractability).
  • To fill this gap, the authors introduce (l, b)-inextractability, requiring that any black-box adversary needs at least 2^b expected queries to induce the API to emit a protected l-gram substring.
  • They provide extraction-game formulations and derive rank-based upper bounds for targeted, untargeted, and approximate extraction, along with an estimator that aggregates risk across multiple attack trials and decoding/prefix adaptations.
  • The work includes empirical evaluations across different models, demonstrates improved estimation over prior extraction-risk measures, and offers mitigation guidance spanning training, API access controls, and decoding configurations, with code released publicly.
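The (l, b)-inextractability definition above can be illustrated with a minimal sketch. This is not the paper's formalism, only a toy translation of the core idea: if an adversary's best single black-box query succeeds in eliciting the protected l-gram with probability p, the expected number of queries is roughly 1/p, so the achieved security level in bits is log2(1/p); the function names and the per-query-probability framing here are illustrative assumptions.

```python
import math

def inextractability_bits(success_prob: float) -> float:
    """Security level in bits: an adversary whose best single-query success
    probability is p needs about 1/p expected queries, i.e. b = log2(1/p).
    (Toy framing of (l, b)-inextractability; not the paper's estimator.)"""
    if not 0.0 < success_prob <= 1.0:
        raise ValueError("success probability must be in (0, 1]")
    return math.log2(1.0 / success_prob)

def satisfies_l_b(success_prob: float, b: float) -> bool:
    """(l, b)-inextractability holds for this l-gram when the expected
    number of adversary queries, 1/p, is at least 2^b."""
    return inextractability_bits(success_prob) >= b
```

For example, a per-query success probability of 2^-20 clears a b = 16 budget, while p = 0.5 (one expected coin flip) falls far short of b = 8.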

Abstract

Indistinguishability properties such as differential privacy bounds or low empirically measured membership inference are widely treated as proxies to show a model is sufficiently protected against broader memorization risks. However, we show that indistinguishability properties are neither sufficient nor necessary for preventing data extraction in LLM APIs. We formalize a privacy-game separation between extraction and indistinguishability-based privacy, showing that indistinguishability and inextractability are incomparable: upper-bounding distinguishability does not upper-bound extractability. To address this gap, we introduce (l, b)-inextractability as a definition that requires at least 2^b expected queries for any black-box adversary to induce the LLM API to emit a protected l-gram substring. We instantiate this via a worst-case extraction game and derive a rank-based extraction risk upper bound for targeted exact extraction, as well as extensions to cover untargeted and approximate extraction. The resulting estimator captures the extraction risk over multiple attack trials and prefix adaptations. We show that it can provide a tight and efficient estimation for standard greedy extraction and an upper bound on the probabilistic extraction risk given any decoding configuration. We empirically evaluate extractability across different models, clarifying its connection to distinguishability, demonstrating its advantage over existing extraction risk estimators, and providing actionable mitigation guidelines across model training, API access, and decoding configurations in LLM API deployment. Our code is publicly available at: https://github.com/Emory-AIMS/Inextractability.
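The rank-based view behind the targeted-extraction bound can be sketched with a toy example. Under greedy decoding, an API emits the protected continuation exactly when the protected token is the argmax of the next-token distribution at every step, and the per-step rank of each protected token indicates how far the model is from that failure mode. The function names and the raw-logit interface below are assumptions for illustration; the paper's actual estimator additionally aggregates over attack trials and prefix/decoding adaptations.

```python
import numpy as np

def greedy_extracts(step_logits: np.ndarray, target_ids: list[int]) -> bool:
    """Greedy decoding emits the protected l-gram iff, at every step, the
    protected token is the argmax of the model's next-token logits."""
    return all(int(np.argmax(logits)) == tok
               for logits, tok in zip(step_logits, target_ids))

def token_ranks(step_logits: np.ndarray, target_ids: list[int]) -> list[int]:
    """Rank (1 = most likely) of each protected token at its step; large
    ranks suggest the l-gram is hard to induce under near-greedy decoding."""
    ranks = []
    for logits, tok in zip(step_logits, target_ids):
        # rank = number of tokens with strictly higher logit, plus one
        ranks.append(int((logits > logits[tok]).sum()) + 1)
    return ranks
```

With a two-step, three-token toy model, a continuation whose tokens are the argmax at both steps is greedily extractable (all ranks 1), while a continuation containing any rank-2 token is not.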