Advancing Multi-Robot Networks via MLLM-Driven Sensing, Communication, and Computation: A Comprehensive Survey

arXiv cs.RO / 4/2/2026


Key Points

  • The article is a comprehensive survey of multi-robot networks coordinated by multimodal large language models (MLLMs), focusing on how teams of robots share sensing, communication, and computation under real resource constraints.
  • It frames multi-robot coordination as an “intent-to-resource orchestration” problem, where high-level natural-language goals are used to select sensing modalities, allocate bandwidth, and choose where computation runs.
  • The survey reviews end-to-end system designs that split reasoning across on-device models and edge/cloud servers, addressing practical limits like network overload when robots transmit rich multimodal data.
  • It includes four demonstration scenarios (digital-twin warehouse navigation, mobility-driven proactive MCS control, a FollowMe robot with semantic sensing, and real-hardware open-vocabulary trash sorting) and evaluates approaches using system-level metrics such as payload, latency, and success.
  • The key takeaway is that jointly optimizing sensing, communication, and computation via MLLM-guided orchestration can outperform purely on-device baselines in task performance.
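The "intent-to-resource orchestration" framing in the points above can be made concrete with a toy rule-based policy. This sketch is purely illustrative: the keyword map, bandwidth numbers, and thresholds are hypothetical placeholders, and in the surveyed systems an MLLM, not keyword matching, interprets the instruction.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    modalities: list       # which sensors to activate
    bandwidth_mbps: float  # uplink budget allocated to this robot
    placement: str         # "on-device" or "edge"

# Hypothetical intent->modality map; an MLLM would infer this from the goal.
INTENT_MODALITIES = {
    "navigate": ["lidar", "odometry"],
    "inspect": ["rgb_camera", "thermal"],
    "sort": ["rgb_camera", "depth"],
}

def orchestrate(instruction: str, link_capacity_mbps: float) -> Plan:
    """Map a natural-language goal to sensing, bandwidth, and compute placement."""
    modalities = ["rgb_camera"]  # default when no intent keyword matches
    for keyword, mods in INTENT_MODALITIES.items():
        if keyword in instruction.lower():
            modalities = mods
            break
    # Rich visual modalities justify edge offloading, but only if the link can carry them.
    needs_heavy_model = "rgb_camera" in modalities or "thermal" in modalities
    placement = "edge" if needs_heavy_model and link_capacity_mbps >= 10 else "on-device"
    bandwidth = min(link_capacity_mbps, 5.0 * len(modalities))  # assumed 5 Mbps per stream
    return Plan(modalities, bandwidth, placement)

plan = orchestrate("Navigate to bay 7", link_capacity_mbps=20)
print(plan.modalities, plan.bandwidth_mbps, plan.placement)
```

The point of the sketch is the structure of the decision, not the rules themselves: the goal filters sensing first, and communication and computation budgets follow from that choice.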

Abstract

Imagine advanced humanoid robots, powered by multimodal large language models (MLLMs), coordinating missions across industries like warehouse logistics, manufacturing, and safety rescue. While individual robots show local autonomy, realistic tasks demand coordination among multiple agents sharing vast streams of sensor data. Communication is indispensable, yet transmitting comprehensive data can overwhelm networks, especially when a system-level orchestrator or cloud-based MLLM fuses multimodal inputs for route planning or anomaly detection. These tasks are often initiated by high-level natural language instructions. This intent serves as a filter for resource optimization: by understanding the goal via MLLMs, the system can selectively activate relevant sensing modalities, dynamically allocate bandwidth, and determine computation placement. Thus, R2X is fundamentally an intent-to-resource orchestration problem where sensing, communication, and computation are jointly optimized to maximize task-level success under resource constraints. This survey examines how integrated design paves the way for multi-robot coordination under MLLM guidance. We review state-of-the-art sensing modalities, communication strategies, and computing approaches, highlighting how reasoning is split between on-device models and powerful edge/cloud servers. We present four end-to-end demonstrations (sense -> communicate -> compute -> act): (i) digital-twin warehouse navigation with predictive link context, (ii) mobility-driven proactive MCS control, (iii) a FollowMe robot with a semantic-sensing switch, and (iv) real-hardware open-vocabulary trash sorting via edge-assisted MLLM grounding. We emphasize system-level metrics -- payload, latency, and success -- to show why R2X orchestration outperforms purely on-device baselines.
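As a toy illustration of the system-level evaluation the abstract emphasizes, the comparison between a purely on-device baseline and edge-assisted orchestration reduces to aggregating payload, latency, and success over trials. All numbers below are made-up placeholders, not results from the survey; only the metric structure reflects the text.

```python
from dataclasses import dataclass

@dataclass
class TrialMetrics:
    payload_kb: float  # data transmitted over the network in one trial
    latency_s: float   # end-to-end sense -> communicate -> compute -> act time
    success: bool      # task-level outcome

def summarize(trials):
    """Aggregate per-trial metrics into the system-level view used for comparison."""
    n = len(trials)
    return {
        "avg_payload_kb": sum(t.payload_kb for t in trials) / n,
        "avg_latency_s": sum(t.latency_s for t in trials) / n,
        "success_rate": sum(t.success for t in trials) / n,
    }

# Placeholder trials: on-device sends nothing but fails more often;
# edge-assisted pays a communication and latency cost for higher task success.
on_device = [TrialMetrics(0, 0.8, False), TrialMetrics(0, 0.7, True)]
edge_assisted = [TrialMetrics(120, 1.1, True), TrialMetrics(150, 1.2, True)]

print(summarize(on_device))
print(summarize(edge_assisted))
```

Reporting all three metrics together, rather than task success alone, is what lets the survey argue that the communication cost of offloading is justified.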