Estimating near-verbatim extraction risk in language models with decoding-constrained beam search
arXiv cs.LG / 3/27/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper argues that existing memorization and extraction-risk measurements based on greedy decoding understate how risk varies across sequences and fail to capture near-verbatim extraction entirely.
- It introduces probabilistic extraction, which computes the probability of generating a target suffix from a given prefix under a given decoding scheme, but notes that exact computation is tractable only for purely verbatim targets (see the first sketch after this list).
- The authors propose decoding-constrained beam search to approximate near-verbatim extraction risk efficiently, yielding deterministic lower bounds at roughly the cost of 20 Monte Carlo samples per sequence (second sketch below).
- Experiments show the method surfaces substantially more extractable sequences, assigns higher per-sequence extraction probability mass, and reveals model- and text-dependent patterns that verbatim-only approaches cannot detect.
- Overall, the work targets a major privacy- and copyright-relevant blind spot by making near-verbatim memorization risk measurable without prohibitive sampling costs.
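To make the probabilistic-extraction idea concrete, here is a minimal sketch of computing the probability that a model emits a specific target suffix under top-k sampling. The `next_token_probs` interface is a hypothetical stand-in for a language model's next-token distribution, not code from the paper; the sketch shows why exact computation only works for verbatim targets: the probability is a simple product over positions, and it collapses to zero the moment a target token falls outside the sampleable set.

```python
import numpy as np

def suffix_probability(next_token_probs, prefix, suffix, k=40):
    """Probability of emitting `suffix` after `prefix` under top-k sampling.

    `next_token_probs(context)` is a hypothetical model interface returning
    a 1-D array of next-token probabilities for the given token context.
    """
    context = list(prefix)
    prob = 1.0
    for token in suffix:
        p = next_token_probs(context)
        topk = np.argsort(p)[-k:]            # token ids kept by top-k filtering
        if token not in topk:
            return 0.0                       # target token cannot be sampled
        prob *= p[token] / p[topk].sum()     # renormalized top-k probability
        context.append(token)
    return prob
```

Allowing near-verbatim matches breaks this simple product: the set of acceptable continuations grows combinatorially, which is where sampling-based estimates, and the paper's beam-search alternative, come in.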
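The beam-search bound can be sketched in the same style. This is one plausible reading of "decoding-constrained beam search", assuming a top-k constraint on expansions and edit distance as the notion of "near-verbatim"; `beams`, `max_dist`, and the `levenshtein` helper are illustrative assumptions, not details taken from the paper. Because pruning only ever discards probability mass, whatever survives and lands within the edit-distance budget is a deterministic lower bound on the true near-verbatim extraction probability.

```python
import numpy as np

def levenshtein(a, b):
    """Edit distance via the standard single-row dynamic program."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # delete x
                                     dp[j - 1] + 1,    # insert y
                                     prev + (x != y))  # substitute x -> y
    return dp[-1]

def near_verbatim_lower_bound(next_token_probs, prefix, suffix,
                              k=40, beams=20, max_dist=2):
    """Deterministic lower bound on near-verbatim extraction probability."""
    frontier = [(1.0, list(prefix), [])]     # (probability, context, generated)
    for _ in range(len(suffix)):
        expanded = []
        for prob, ctx, gen in frontier:
            p = next_token_probs(ctx)
            topk = np.argsort(p)[-k:]        # expand only sampleable tokens
            z = p[topk].sum()                # top-k renormalizer
            for tok in topk:
                expanded.append((prob * p[tok] / z, ctx + [tok], gen + [tok]))
        expanded.sort(key=lambda beam: beam[0], reverse=True)
        frontier = expanded[:beams]          # pruning can only drop mass
    return sum(prob for prob, _, gen in frontier
               if levenshtein(gen, suffix) <= max_dist)
```

With `beams=20`, each step scores about as many candidate continuations as 20 Monte Carlo rollouts would, which is consistent with the cost comparison in the Key Points, but the result is reproducible rather than a noisy estimate.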