Safety in Embodied AI: A Survey of Risks, Attacks, and Defenses
arXiv cs.CV / 5/6/2026
Key Points
- The paper surveys safety risks in Embodied AI, where agents perceive, plan, act, and interact in open-world, safety-critical settings like transportation, healthcare, and robotics.
- It highlights why embodied systems are uniquely dangerous compared with purely digital AI, due to uncertain sensing, incomplete knowledge, and dynamic human-robot interactions that can cause direct physical harm.
- The authors propose a multi-level taxonomy covering attacks and defenses across the entire embodied pipeline, from perception and cognition to planning, action, and interaction.
- Drawing on 400+ papers, the review synthesizes work on adversarial, backdoor, jailbreak, and hardware-level attacks, along with methods for detection, safe training, and robust inference.
- It identifies key underexplored challenges, including fragile multimodal perception fusion, planning instability under jailbreak attacks, and trustworthy interaction in open-ended scenarios.
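To make the adversarial-attack category concrete: a classic one-step attack (FGSM, in the spirit of the perception-level attacks the survey catalogues) perturbs an input in the direction of the loss gradient. The sketch below is a toy illustration against a hypothetical linear classifier, not the survey's method; every name and parameter here is an assumption chosen for clarity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps=0.1):
    """One-step FGSM: x_adv = clip(x + eps * sign(dL/dx)) for logistic loss.

    For a linear model p = sigmoid(w @ x + b) with binary cross-entropy
    loss, the input gradient has the closed form (p - y) * w, so no
    autodiff framework is needed for this toy example.
    """
    p = sigmoid(w @ x + b)      # model confidence for class 1
    grad_x = (p - y) * w        # analytic input gradient of the loss
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# Hypothetical "sensor reading" in [0, 1] and a fixed toy classifier.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.0
x = rng.uniform(0.2, 0.8, size=16)
y = 1.0  # true label

x_adv = fgsm_perturb(x, w, b, y, eps=0.2)
# The perturbation provably lowers the model's confidence in the true
# class for this linear model, while staying within the valid input range.
print(sigmoid(w @ x_adv + b) < sigmoid(w @ x + b))
```

Real embodied pipelines replace the analytic gradient with backpropagation through a deep perception network, and the attack surface extends to physical patches and sensor spoofing, which is where the survey's taxonomy goes well beyond this sketch.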