M2HRI: An LLM-Driven Multimodal Multi-Agent Framework for Personalized Human-Robot Interaction
arXiv cs.RO / 4/15/2026
Key Points
- The paper introduces M2HRI, an LLM-driven multimodal multi-agent framework for multi-robot human-robot interaction (HRI) that treats robots as distinct individuals rather than functionally identical units.
- It equips each robot with LLM-based personality traits and long-term memory, and coordinates the group through a mechanism conditioned on those individual differences (see the sketch after this list).
- In a controlled user study with 105 participants, the LLM-derived personality traits were distinguishable to users and improved interaction quality.
- The study also shows that long-term memory boosts personalization and users’ preference awareness, while centralized coordination reduces behavioral overlap and improves overall interaction quality.
- The authors conclude that both agent individuality and structured coordination are key to coherent, socially appropriate multi-agent HRI.
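
The summary does not include the paper's code, so the following is a minimal, hypothetical Python sketch of the architecture it describes: each robot as a distinct agent with an LLM persona and a long-term memory of user preferences, plus a centralized coordinator that allocates turns so behaviors do not overlap. The class names (`RobotAgent`, `Coordinator`, `build_prompt`) and the round-robin allocation policy are illustrative assumptions, not the authors' implementation.

```python
"""Hypothetical sketch of an M2HRI-style setup; names and the
coordination policy are assumptions, not the paper's code."""
from dataclasses import dataclass, field


@dataclass
class RobotAgent:
    """One robot as a distinct individual: an LLM persona plus memory."""
    name: str
    personality: str                                  # e.g. "warm, talkative"
    memory: list[str] = field(default_factory=list)   # long-term user memory

    def remember(self, fact: str) -> None:
        """Store a user preference for later personalization."""
        self.memory.append(fact)

    def build_prompt(self, user_input: str) -> str:
        """Condition the LLM on this robot's persona and stored memories."""
        memories = "\n".join(f"- {m}" for m in self.memory) or "- (none yet)"
        return (
            f"You are {self.name}, a robot with this personality: "
            f"{self.personality}.\n"
            f"Known user preferences:\n{memories}\n"
            f"User says: {user_input}\nRespond in character."
        )


class Coordinator:
    """Centralized coordination: picks one robot to respond per turn
    so the agents' behaviors do not overlap."""

    def __init__(self, agents: list[RobotAgent]) -> None:
        self.agents = agents
        self._turn = 0

    def select(self, user_input: str) -> RobotAgent:
        # Placeholder policy: round-robin. The paper's mechanism is
        # conditioned on the agents' individual differences instead.
        agent = self.agents[self._turn % len(self.agents)]
        self._turn += 1
        return agent


if __name__ == "__main__":
    a = RobotAgent("Robo-A", "warm and talkative")
    b = RobotAgent("Robo-B", "reserved and precise")
    a.remember("User prefers short answers.")
    coordinator = Coordinator([a, b])
    speaker = coordinator.select("What's the weather like?")
    print(speaker.build_prompt("What's the weather like?"))
```

A real implementation would likely replace the round-robin placeholder with an allocator that weighs each robot's persona and memory against the current context, since the paper credits coordination conditioned on individual differences for reducing behavioral overlap.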