Perturbation Dose Responses in Recursive LLM Loops: Raw Switching, Stochastic Floors, and Persistent Escape under Append, Replace, and Dialog Updates
arXiv cs.AI / 5/5/2026
💬 Opinion · Models & Research
Key Points
- The paper studies how much injected text (a “dose”) is needed to perturb 30-step recursive LLM loops from one attractor-like pattern to another, and whether that redirection persists.
- It finds that persistent redirection in append-mode recursive loops depends strongly on the memory policy: tail-clipping limits persistence (roughly 16% destination-coherent persistence at a dose of 400 tokens), whereas full-history settings exceed 50% persistence at ~400 tokens and saturate at 75–80% for source-basin escape.
- A multi-part falsification suite suggests the apparent high-dose “destination-coherent dip” is a finite-horizon, endpoint-timing-sensitive effect rather than a stable structural asymmetry.
- Replace-mode “raw switching” is mostly near-saturated under the default protocol, but it appears to reflect state-reset overwrite; insert-mode probing reduces it substantially (to roughly 12–32%).
- The authors run 37 experiments on GPT-4o-mini with vendor replication on GPT-4.1-nano, emphasizing that evaluation should separate transient movement from durable escape, account for stochastic floors, and treat context-update rules as safety-relevant design parameters.