PermaFrost-Attack: Stealth Pretraining Seeding(SPS) for planting Logic Landmines During LLM Training
arXiv cs.AI / 4/27/2026
💬 OpinionSignals & Early TrendsIdeas & Deep AnalysisModels & Research
Key Points
- The paper proposes Stealth Pretraining Seeding (SPS), an attack that hides poisoned training content on “stealth” websites and relies on web crawlers to incorporate it into future LLM training corpora.
- Because each poisoned payload is tiny, diffuse, and seemingly benign, SPS can evade detection during dataset construction, filtering, and standard evaluation.
- The authors demonstrate a “latent logic landmine” effect, where dormant harmful behavior can be triggered later by precise alphanumeric triggers to bypass safety safeguards.
- The study operationalizes the threat as “PermaFrost-Attack” and introduces geometric diagnostic tools (Thermodynamic Length, Spectral Curvature, and Infection Traceback Graph) to analyze and understand the hidden vulnerabilities.
- Experiments across multiple model families and scales suggest SPS is broadly effective at inducing persistent unsafe behavior while often evading alignment defenses, making it an underappreciated risk to future foundation models.
Related Articles

The company with a monopoly on AI's most critical machine is racing to build more
THE DECODER

Legal Insight Transformation: 7 Mistakes to Avoid When Adopting AI Tools
Dev.to

Legal Insight Transformation: Traditional vs. AI-Driven Research Compared
Dev.to

Legal Insight Transformation: A Beginner's Guide to Modern Research
Dev.to
The Open Source AI Studio That Nobody's Talking About
Dev.to