From Prompt to Physical Actuation: Holistic Threat Modeling of LLM-Enabled Robotic Systems
arXiv cs.RO / 5/1/2026
Tags: Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper warns that when LLMs are used in autonomous robots for planning and control, malicious or unsafe prompts/outputs can propagate through the decision pipeline and cause real-world physical harm.
- It proposes a unified architectural threat model for an edge-cloud LLM-enabled robot using a hierarchical Data Flow Diagram and STRIDE-per-interaction analysis.
- By analyzing six boundary-crossing interaction points with a taxonomy covering conventional cyber threats, adversarial threats, and conversational threats, the study shows these threat types converge at the same boundaries.
- The authors trace three cross-boundary attack chains that can ultimately lead to unsafe actuation, highlighting architectural weaknesses such as missing semantic validation, risky cross-modal translation (vision to language instructions), and insufficient mediation during provider-side tool use.
- The authors position the work as the first DFD-based threat model to integrate all three threat categories across the full perception–planning–actuation pipeline of LLM-enabled robotic systems.
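The STRIDE-per-interaction approach described above can be sketched in a few lines: enumerate every data flow that crosses a trust boundary in the DFD, then pair each flow with each STRIDE category to produce the candidate threat list to triage. The component and boundary names below are illustrative stand-ins, not the paper's actual six interaction points.

```python
# Hypothetical sketch of STRIDE-per-interaction enumeration over an
# edge-cloud LLM-robot data flow diagram. All element and boundary
# names are assumptions for illustration, not taken from the paper.
from dataclasses import dataclass
from itertools import product

STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service",
    "Elevation of privilege",
]

@dataclass(frozen=True)
class Interaction:
    source: str    # DFD element sending data
    target: str    # DFD element receiving data
    boundary: str  # trust boundary the flow crosses

# Six illustrative boundary-crossing flows in a perception-planning-
# actuation pipeline (placeholder names).
INTERACTIONS = [
    Interaction("operator_ui", "edge_planner", "user/robot"),
    Interaction("camera", "vision_to_text", "sensor/planner"),
    Interaction("edge_planner", "cloud_llm", "edge/cloud"),
    Interaction("cloud_llm", "provider_tools", "llm/tool"),
    Interaction("cloud_llm", "edge_planner", "cloud/edge"),
    Interaction("edge_planner", "actuators", "planner/actuation"),
]

def enumerate_threats(interactions):
    """Yield one (interaction, STRIDE category) candidate per pair."""
    for inter, category in product(interactions, STRIDE):
        yield inter, category

candidates = list(enumerate_threats(INTERACTIONS))
print(len(candidates))  # 6 interactions x 6 categories = 36 candidates
```

Each candidate pair would then be triaged by hand, which is where the paper's taxonomy (conventional cyber, adversarial, conversational) comes in: the same boundary crossing can host threats from all three classes.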