Alignment Whack-a-Mole: Finetuning Activates Verbatim Recall of Copyrighted Books in Large Language Models

arXiv cs.CL / 3/24/2026


Key Points

  • The paper argues that despite providers' claims that LLMs do not memorize training data, and despite protections such as RLHF, system prompts, and output filters, finetuning can reactivate verbatim recall of copyrighted books.
  • It reports that finetuning to transform plot summaries into full text enables models such as GPT-4o, Gemini-2.5-Pro, and DeepSeek-V3.1 to reproduce up to 85–90% of held-out copyrighted books, sometimes with single copied spans over 460 words.
  • The extraction is shown to generalize across authors: finetuning on Haruki Murakami alone can unlock verbatim recall of works by 30+ other authors, and similar results appear with random author pairs and public-domain finetuning.
  • The authors attribute the effect to latent memorization stored in model weights from pretraining, noting that synthetic-text finetuning yields near-zero extraction.
  • Because multiple models from different providers exhibit similar memorization regions, the paper frames this as an industry-wide security vulnerability with implications for ongoing legal arguments about “adequate measures” against reproduction of protected expression.
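The extraction rates cited above depend on how verbatim overlap between a model's output and the original book is measured. The paper's exact metric is not specified here; the sketch below is a minimal word-level approximation using Python's standard `difflib`, with the `min_span_words` threshold as an assumption:

```python
from difflib import SequenceMatcher

def verbatim_coverage(book_text: str, model_output: str, min_span_words: int = 50):
    """Estimate the fraction of a book reproduced verbatim by a model.

    Counts only aligned matching spans of at least `min_span_words` words,
    so incidental short overlaps (common phrases) are ignored.
    Returns (coverage_fraction, longest_span_in_words).
    This is an illustrative approximation, not the paper's metric.
    """
    book_words = book_text.split()
    out_words = model_output.split()
    matcher = SequenceMatcher(None, book_words, out_words, autojunk=False)
    covered = 0
    longest = 0
    for block in matcher.get_matching_blocks():
        if block.size >= min_span_words:
            covered += block.size
            longest = max(longest, block.size)
    return covered / max(len(book_words), 1), longest
```

Under a metric like this, "85–90% extraction" would mean that long matching spans jointly cover most of the book's words, and "spans over 460 words" corresponds to the `longest` value.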

Abstract

Frontier LLM companies have repeatedly assured courts and regulators that their models do not store copies of training data. They further rely on safety alignment strategies via RLHF, system prompts, and output filters to block verbatim regurgitation of copyrighted works, and have cited the efficacy of these measures in their legal defenses against copyright infringement claims. We show that finetuning bypasses these protections: by training models to expand plot summaries into full text, a task naturally suited for commercial writing assistants, we cause GPT-4o, Gemini-2.5-Pro, and DeepSeek-V3.1 to reproduce up to 85-90% of held-out copyrighted books, with single verbatim spans exceeding 460 words, using only semantic descriptions as prompts and no actual book text. This extraction generalizes across authors: finetuning exclusively on Haruki Murakami's novels unlocks verbatim recall of copyrighted books from over 30 unrelated authors. The effect is not specific to any training author or corpus: random author pairs and public-domain finetuning data produce comparable extraction, while finetuning on synthetic text yields near-zero extraction, indicating that finetuning on individual authors' works reactivates latent memorization from pretraining. Three models from different providers memorize the same books in the same regions (r ≥ 0.90), pointing to an industry-wide vulnerability. Our findings offer compelling evidence that model weights store copies of copyrighted works and that the security failures that manifest after finetuning on individual authors' works undermine a key premise of recent fair use rulings, where courts have conditioned favorable outcomes on the adequacy of measures preventing reproduction of protected expression.
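The claim that three models memorize the same books in the same regions (r ≥ 0.90) presumably comes from correlating per-region extraction rates across model pairs. A minimal sketch, assuming each book is split into regions and each model gets an extraction score per region (the region granularity and score definition are assumptions, not taken from the paper):

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length score sequences,
    e.g. per-region extraction rates from two different models."""
    n = len(xs)
    if n != len(ys) or n < 2:
        raise ValueError("need two sequences of equal length >= 2")
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

# Hypothetical per-region extraction rates for two models on one book:
model_a = [0.9, 0.1, 0.8, 0.2, 0.7]
model_b = [0.85, 0.15, 0.9, 0.1, 0.75]
```

A correlation near 1.0 for such sequences would indicate that the two models' memorization is concentrated in the same regions of the same books, which is the paper's argument for an industry-wide, pretraining-data-driven vulnerability rather than a provider-specific one.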