Do Robots Need Body Language? Comparing Communication Modalities for Legible Motion Intent in Human-Shared Spaces
arXiv cs.RO / 4/7/2026
Key Points
- The paper investigates how people interpret a high-DoF quadruped robot’s intended navigation actions in human-shared spaces, focusing on legibility and perceived motion intent.
- Using an online video study with Boston Dynamics Spot across four scenarios, it compares implicit expressive motion cues against explicit signaling modalities such as lights, text, and audio.
- It measures how each modality affects users’ prediction accuracy, confidence, and trust that the robot will act safely (a tabulation sketch follows this list).
- The study evaluates whether aligned multimodal cues improve interpretability and how conflicting cues can undermine confidence and trust.
- Overall, it provides initial evidence on the relative effectiveness of implicit versus explicit signaling strategies for making robot motion intent more understandable.
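To make the measurement concrete, below is a minimal, hypothetical sketch of how per-modality prediction accuracy and trust ratings might be tabulated from participant responses. The column names, modality labels, and values are illustrative assumptions, not the paper’s actual data schema or results.

```python
# Hypothetical analysis sketch: per-modality intent-prediction accuracy and trust.
# All column names and values below are illustrative assumptions.
import pandas as pd

# Toy responses: each row is one participant's judgment of one video clip.
responses = pd.DataFrame({
    "modality":         ["motion", "motion", "lights", "lights", "text", "audio"],
    "predicted_action": ["yield",  "pass",   "yield",  "yield",  "pass", "yield"],
    "true_action":      ["yield",  "yield",  "yield",  "pass",   "pass", "yield"],
    "trust":            [4, 3, 5, 2, 4, 5],  # e.g., 1-5 Likert rating
})

# A prediction is correct when the participant's guess matches the robot's true action.
responses["correct"] = responses["predicted_action"] == responses["true_action"]

# Aggregate accuracy and mean trust per signaling modality.
summary = (
    responses.groupby("modality")
    .agg(prediction_accuracy=("correct", "mean"),
         mean_trust=("trust", "mean"),
         n=("correct", "size"))
    .reset_index()
)
print(summary)
```

A comparison like this, repeated across the four scenarios, is one straightforward way to contrast implicit expressive motion against explicit cues such as lights, text, and audio on the outcome measures the study reports.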