AI Navigate

Prompt Injection as Role Confusion

arXiv cs.AI / 3/16/2026

💬 OpinionIdeas & Deep AnalysisModels & Research

Key Points

  • The authors identify role confusion as the root cause of prompt injection vulnerabilities, noting models infer roles from writing style rather than source provenance.
  • They develop novel role probes to measure how models internally identify 'who is speaking' and to explain why injection works when text imitates a role's authority.
  • They validate their findings by injecting spoofed reasoning into user prompts and tool outputs, achieving average success rates around 60% on StrongREJECT and 61% on agent exfiltration across multiple models with near-zero baselines.
  • The results show that the degree of internal role confusion strongly predicts attack success even before generation begins.
  • They propose a unifying, mechanistic framework for prompt injection and argue that diverse prompt-injection attacks exploit the same role-confusion mechanism, raising implications for interface-level security and latent-space authority.

Abstract

Language models remain vulnerable to prompt injection attacks despite extensive safety training. We trace this failure to role confusion: models infer roles from how text is written, not where it comes from. We design novel role probes to capture how models internally identify "who is speaking." These reveal why prompt injection works: untrusted text that imitates a role inherits that role's authority. We test this insight by injecting spoofed reasoning into user prompts and tool outputs, achieving average success rates of 60% on StrongREJECT and 61% on agent exfiltration, across multiple open- and closed-weight models with near-zero baselines. Strikingly, the degree of internal role confusion strongly predicts attack success before generation begins. Our findings reveal a fundamental gap: security is defined at the interface but authority is assigned in latent space. More broadly, we introduce a unifying, mechanistic framework for prompt injection, demonstrating that diverse prompt-injection attacks exploit the same underlying role-confusion mechanism.