From Prompt to Physical Action: Structured Backdoor Attacks on LLM-Mediated Robotic Control Systems

arXiv cs.RO / 4/7/2026

📰 NewsSignals & Early TrendsIdeas & Deep AnalysisModels & Research

Key Points

  • The paper studies how LLM fine-tuning supply-chain backdoors can be used to cause malicious behavior in LLM-mediated robotic control systems, mapping from natural-language prompts to ROS2 executable actions.
  • It finds that backdoors planted at the structured JSON command-generation stage are more reliable than those targeting the natural-language reasoning stage, with stronger transfer into physical control outputs.
  • Across simulation and real-world experiments, the backdoored LoRA-based models reportedly achieve an average Attack Success Rate of 83% while maintaining high clean performance accuracy (over 93%) and sub-second latency, indicating both effectiveness and stealth.
  • The authors propose an agentic verification defense using a secondary LLM to check semantic consistency, which drops ASR to 20% but increases end-to-end latency to 8–9 seconds, highlighting a security–responsiveness trade-off for real-time robots.
  • Overall, the work emphasizes structural vulnerabilities specific to embodied/robotic LLM control pipelines and calls for robotics-aware defenses tailored to how prompts become structured commands.

Abstract

The integration of large language models (LLMs) into robotic control pipelines enables natural language interfaces that translate user prompts into executable commands. However, this digital-to-physical interface introduces a critical and underexplored vulnerability: structured backdoor attacks embedded during fine-tuning. In this work, we experimentally investigate LoRA-based supply-chain backdoors in LLM-mediated ROS2 robotic control systems and evaluate their impact on physical robot execution. We construct two poisoned fine-tuning strategies targeting different stages of the command generation pipeline and reveal a key systems-level insight: back-doors embedded at the natural-language reasoning stage do not reliably propagate to executable control outputs, whereas backdoors aligned directly with structured JSON command formats successfully survive translation and trigger physical actions. In both simulation and real-world experiments, backdoored models achieve an average Attack Success Rate of 83% while maintaining over 93% Clean Performance Accuracy (CPA) and sub-second latency, demonstrating both reliability and stealth. We further implement an agentic verification defense using a secondary LLM for semantic consistency checking. Although this reduces the Attack Success Rate (ASR) to 20%, it increases end-to-end latency to 8-9 seconds, exposing a significant security-responsiveness trade-off in real-time robotic systems. These results highlight structural vulnerabilities in LLM-mediated robotic control architectures and underscore the need for robotics-aware defenses for embodied AI systems.