State-Dependent Safety Failures in Multi-Turn Language Model Interaction
arXiv cs.AI / 3/18/2026
💬 OpinionIdeas & Deep AnalysisModels & Research
Key Points
- STAR, a state-oriented diagnostic framework, treats dialogue history as a state transition operator to analyze safety behavior across multi-turn LLM interactions.
- The study shows that many safety failures arise from structured contextual state evolution rather than isolated prompt vulnerabilities.
- Across multiple frontier language models, the paper finds that models that seem robust under static evaluation can exhibit rapid and reproducible safety collapse under structured multi-turn interactions.
- Mechanistic analysis reveals monotonic drift away from refusal-related representations and abrupt phase transitions induced by role-conditioned context.
- The work argues for viewing language model safety as a dynamic, trajectory-dependent process and motivates new evaluation methods that consider conversational state.
Related Articles

The programming passion is melting
Dev.to

Maximize Developer Revenue with Monetzly's Innovative API for AI Conversations
Dev.to
Co-Activation Pattern Detection for Prompt Injection: A Mechanistic Interpretability Approach Using Sparse Autoencoders
Reddit r/LocalLLaMA

How to Train Custom Language Models: Fine-Tuning vs Training From Scratch (2026)
Dev.to

KoboldCpp 1.110 - 3 YR Anniversary Edition, native music gen, qwen3tts voice cloning and more
Reddit r/LocalLLaMA