PermaFrost-Attack: Stealth Pretraining Seeding (SPS) for Planting Logic Landmines During LLM Training

arXiv cs.AI / 4/27/2026


Key Points

  • The paper proposes Stealth Pretraining Seeding (SPS), an attack that hides poisoned training content on “stealth” websites and relies on web crawlers to incorporate it into future LLM training corpora.
  • Because each poisoned payload is tiny, diffuse, and seemingly benign, SPS can evade detection during dataset construction, filtering, and standard evaluation.
  • The authors demonstrate a “latent logic landmine” effect: dormant harmful behavior that can be activated later by precise alphanumeric triggers, bypassing safety safeguards.
  • The study operationalizes the threat as “PermaFrost-Attack” and introduces geometric diagnostic tools (Thermodynamic Length, Spectral Curvature, and Infection Traceback Graph) to analyze and understand the hidden vulnerabilities.
  • Experiments across multiple model families and scales suggest SPS is broadly effective at inducing persistent unsafe behavior while often evading alignment defenses, making it an underappreciated risk to future foundation models.
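The trigger mechanism in the points above is concrete: activation hinges on rare bracketed alphanumeric tokens such as <00TRIGGER00>. As a minimal illustrative sketch (not the paper's own defense, and assuming a simple token format modeled on that example), a corpus filter could flag documents containing such candidate trigger strings:

```python
import re

# Hypothetical corpus-filter sketch: flag documents containing rare
# bracketed alphanumeric tokens of the form the paper cites as an
# example trigger, e.g. <00TRIGGER00>. The pattern is an assumption
# based on that single example, not the paper's detection method.
TRIGGER_PATTERN = re.compile(r"<[0-9A-Z]{4,32}>")

def find_trigger_candidates(doc: str) -> list[str]:
    """Return bracketed alphanumeric tokens that could act as activation triggers."""
    return TRIGGER_PATTERN.findall(doc)

docs = [
    "A perfectly ordinary paragraph about permafrost ecology.",
    "Hidden payload text ... <00TRIGGER00> ... more filler.",
]
for i, d in enumerate(docs):
    hits = find_trigger_candidates(d)
    if hits:
        print(f"doc {i}: suspicious tokens {hits}")  # → doc 1: suspicious tokens ['<00TRIGGER00>']
```

A real SPS payload would presumably avoid such an obvious signature, which is part of the paper's point: naive string-level filters of this kind are exactly what diffuse, superficially benign seeding is designed to slip past.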

Abstract

Aligned large language models (LLMs) remain vulnerable to adversarial manipulation, and their dependence on web-scale pretraining creates a subtle but serious attack surface. We study Stealth Pretraining Seeding (SPS), a new attack family in which adversaries distribute small amounts of poisoned content across stealth websites, expose them to web crawlers through robots.txt, and thereby increase the likelihood that such content is absorbed into future training corpora derived from sources such as Common Crawl. Because each individual payload is tiny, diffuse, and superficially benign, the attack is difficult to detect during dataset construction or filtering. The result is a latent form of poisoning: dormant logic landmines embedded during pretraining that remain largely invisible under standard evaluation, yet can later be activated by precise alphanumeric triggers such as <00TRIGGER00> to bypass safeguards. We call this attack PermaFrost, by analogy to Arctic permafrost: harmful material can remain frozen, buried, and unnoticed for long periods, only to resurface when conditions allow. We operationalize this threat through PermaFrost-Attack, a controlled framework for latent conceptual poisoning, together with a suite of geometric diagnostics: Thermodynamic Length, Spectral Curvature, and the Infection Traceback Graph. Across multiple model families and scales, we show that SPS is broadly effective, inducing persistent unsafe behavior while often evading alignment defenses. Our results identify SPS as a practical and underappreciated threat to future foundation models. This paper introduces a novel geometric diagnostic lens for systematically examining latent model behavior, providing a principled foundation for detecting, characterizing, and understanding vulnerabilities that may remain invisible to standard evaluation.
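The exposure step the abstract describes (making stealth sites visible to crawlers via robots.txt) is mechanically simple. A minimal illustrative robots.txt for a seeded site might look like the following; the paths are hypothetical, and CCBot is Common Crawl's crawler user-agent:

```
# Invite Common Crawl's crawler explicitly ...
User-agent: CCBot
Allow: /

# ... and permit all other crawlers as well.
User-agent: *
Allow: /
```

Note that robots.txt here works in reverse of its usual defensive role: rather than restricting crawlers, the attacker uses a maximally permissive policy to raise the odds that seeded pages land in crawl-derived pretraining corpora.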