Semantic Denial of Service in LLM-controlled robots

arXiv cs.AI / 4/29/2026

💬 OpinionSignals & Early TrendsIdeas & Deep AnalysisModels & Research

Key Points

  • The paper shows that LLM-based safety instruction-following for robots can introduce an availability vulnerability, allowing attackers to disrupt robot behavior without jailbreaking or overriding policies.
  • By injecting very short, safety-plausible phrases (1–5 tokens) into a robot’s audio channel, an adversary can trigger the LLM’s safety reasoning to halt, delay, or otherwise disrupt execution.
  • Across four vision-language models and multiple defenses/deployment settings, prompt-only defenses often reduce “hard-stop” attacks on some models but shift the failure into other disruption forms such as acknowledgement loops and false alerts, quantified via Disruption Success Rate (DSR).
  • The study finds that varying the injected safety phrases is consistently more effective than repeating the same phrase, implying the models treat diverse safety cues as corroborating evidence.
  • The authors argue the mitigation should be architectural: systems that route unauthenticated audio text directly into the LLM create an avoidable security dependency between safety monitoring and action selection.

Abstract

Safety-oriented instruction-following is supposed to keep LLM-controlled robots safe. We show it also creates an availability attack surface. By injecting short safety-plausible phrases (1-5 tokens) into a robots audio channel, an adversary can trigger the models safety reasoning to halt or disrupt execution without jailbreaking the model or overriding its policy. In the embodied setting, this is a semantic denial-of-service attack: the agent stops because the injected signal looks like a legitimate alert. Across four vision-language models, seven prompt-level defenses, three deployment modes, and single- and multi-injection settings, we find that prompt-only defenses trade off attack suppression against genuine hazard response. The strongest defenses reduce hard-stop attack success on some models, but defenses change the form of disruption, not its fact: suppressed hard stops re-emerge as acknowledge loops and false alerts, which we measure with Disruption Success Rate (DSR). We further find that injection variety is consistently more effective than repeating the same phrase, suggesting that models treat diverse safety cues as corroborating evidence. The practical implication is architectural rather than prompt-level: systems that route unauthenticated audio text directly into the LLM create an avoidable security dependency between safety monitoring and action selection.