ExpressMM: Expressive Mobile Manipulation Behaviors in Human-Robot Interactions
arXiv cs.RO / 4/8/2026
Key Points
- The paper introduces ExpressMM, a framework for generating expressive behaviors in mobile manipulators during human-robot collaborative tasks, aiming to communicate intent to nearby people.
- ExpressMM combines a high-level, language-guided planner, which uses a vision-language model for perception and conversational reasoning, with a low-level vision-language-action policy that produces task-appropriate expressive motions.
- A key contribution is interruptible interaction support, enabling users to modify or redirect robot actions mid-execution rather than relying on fixed or demonstration-only behaviors.
- The authors validate the approach on a mobile manipulator performing collaborative assembly, including live audience-based HRI demonstrations and questionnaire-based evaluations of perceived interpretability, safety, and predictability.
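The two-level design and interruptible interaction described above can be sketched as a simple control loop: a high-level planner proposes subtasks, a low-level policy executes them, and a user utterance mid-execution triggers replanning. This is a hypothetical illustration only; all class and method names (`Planner`, `Policy`, `run`, etc.) are assumptions, not the paper's actual interfaces.

```python
# Hypothetical sketch of a two-level planner/policy loop with
# mid-execution interruption. Names are illustrative, not from the paper.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Planner:
    """Stand-in for the VLM-based, language-guided high-level planner."""

    def plan(self, instruction: str) -> list[str]:
        # A real planner would condition on camera images and dialogue;
        # here we just expand the instruction into placeholder subtasks.
        return [f"{instruction}: step {i}" for i in range(3)]


@dataclass
class Policy:
    """Stand-in for the low-level vision-language-action policy."""

    log: list[str] = field(default_factory=list)

    def execute(self, subtask: str) -> None:
        # A real policy would emit motor commands; we just record the subtask.
        self.log.append(subtask)


def run(planner: Planner, policy: Policy, instruction: str,
        interrupt_at: Optional[int] = None,
        new_instruction: Optional[str] = None) -> list[str]:
    """Execute subtasks; if the user interrupts, discard the rest and replan."""
    subtasks = planner.plan(instruction)
    i = 0
    while i < len(subtasks):
        if interrupt_at is not None and i == interrupt_at and new_instruction:
            # User redirected the robot mid-execution: replan from scratch.
            subtasks = planner.plan(new_instruction)
            interrupt_at, i = None, 0
            continue
        policy.execute(subtasks[i])
        i += 1
    return policy.log
```

For example, interrupting after the first subtask of "attach bracket" with "hand me the wrench" leaves one original subtask in the log followed by the three replanned ones.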