SafeGuard ASF: An Agentic Humanoid Robot System for Autonomous Industrial Safety

arXiv cs.RO / 3/27/2026


Key Points

  • The paper introduces SafeGuard ASF (Agentic Security Fleet), a humanoid-robot framework aimed at enabling autonomous industrial safety in human-free “dark factories.”
  • It combines multi-modal RGB-D perception, a ReAct-based agentic reasoning layer, and learned locomotion policies running on the Unitree G1 humanoid platform.
  • The system targets three hazard scenarios—fire/smoke detection, abnormal pipeline temperature monitoring, and intruder detection in restricted areas—while supporting autonomous patrol and obstacle avoidance.
  • Reported perception performance reaches 94.2% mAP for fire/smoke detection with 127 ms latency, and locomotion policy training shows stable PPO convergence within 80,000 iterations.
  • A “ToolOrchestra” action framework structures decision-making across perception, reasoning, and actuation tools, with validation in both simulation and real-world settings.
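The ReAct-style loop described above can be sketched as a simple orchestrator that alternates between reasoning and tool calls. This is a minimal, hedged illustration: the tool names (`perceive`, `navigate`, `alert`), the smoke-detection scenario, and the rule-based stand-in for the reasoning step are assumptions for demonstration, not the paper's actual ToolOrchestra implementation (which presumably drives the "Thought → Action" step with a language model and real perception/locomotion backends).

```python
# Sketch of a ReAct-style tool-orchestration loop in the spirit of the
# paper's ToolOrchestra framework. Tool names, the hazard scenario, and
# the rule-based reasoner are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Observation:
    tool: str
    result: str

@dataclass
class SafetyAgent:
    trace: List[Observation] = field(default_factory=list)

    # --- stubbed tools (perception, actuation, alerting) ---
    def perceive(self, _arg: str) -> str:
        return "smoke detected in zone B"      # stand-in for RGB-D detector output

    def navigate(self, target: str) -> str:
        return f"arrived at {target}"          # stand-in for locomotion policy

    def alert(self, message: str) -> str:
        return f"alert raised: {message}"      # stand-in for operator notification

    # --- reasoning step: ReAct "Thought -> Action", here rule-based ---
    def reason(self, obs: Optional[Observation]) -> Tuple[str, str]:
        if obs is None:
            return "perceive", ""                       # start by looking around
        if obs.tool == "perceive" and "smoke" in obs.result:
            return "navigate", "zone B"                 # move toward the hazard
        if obs.tool == "navigate":
            return "alert", "confirmed smoke in zone B" # escalate after arrival
        return "done", ""

    def run(self, max_steps: int = 10) -> List[Observation]:
        tools = {"perceive": self.perceive, "navigate": self.navigate, "alert": self.alert}
        obs: Optional[Observation] = None
        for _ in range(max_steps):
            action, arg = self.reason(obs)
            if action == "done":
                break
            obs = Observation(action, tools[action](arg))
            self.trace.append(obs)
        return self.trace

trace = SafetyAgent().run()
for step in trace:
    print(f"{step.tool}: {step.result}")
```

The key design point is that perception, reasoning, and actuation are decoupled: the reasoner only ever sees tool names and observations, so individual tools can be swapped (e.g., a different detector or a teleoperation fallback) without touching the decision loop.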

Abstract

The rise of unmanned "dark factories" operating without human presence demands autonomous safety systems capable of detecting and responding to multiple hazard types. We present SafeGuard ASF (Agentic Security Fleet), a comprehensive framework deploying humanoid robots for autonomous hazard detection in industrial environments. Our system integrates multi-modal perception (RGB-D imaging), a ReAct-based agentic reasoning framework, and learned locomotion policies on the Unitree G1 humanoid platform. We address three critical hazard scenarios: fire and smoke detection, abnormal temperature monitoring in pipelines, and intruder detection in restricted zones. Our perception pipeline achieves 94.2% mAP for fire and smoke detection with 127 ms latency. We train multiple locomotion policies, including dance motion tracking and velocity control, using Unitree RL Lab with PPO, demonstrating stable convergence within 80,000 training iterations. We validate our system in both simulation and real-world environments, demonstrating autonomous patrol, human detection with visual perception, and obstacle avoidance capabilities. The proposed ToolOrchestra action framework enables structured decision-making through perception, reasoning, and actuation tools.
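For readers unfamiliar with the PPO algorithm used to train the locomotion policies, the core idea is a clipped surrogate objective that limits how far a policy update can move in a single step. The function below is a minimal NumPy sketch of that objective; the clip coefficient of 0.2 is a common default and an assumption here, not a hyperparameter reported by the paper or taken from Unitree RL Lab.

```python
# Minimal sketch of the PPO clipped surrogate objective,
# L = -E[min(r * A, clip(r, 1 - eps, 1 + eps) * A)],
# where r is the probability ratio between new and old policies
# and A is the advantage estimate. Illustrative only; not the
# paper's training code.
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Return the (negated, to-minimize) clipped policy-gradient objective."""
    ratio = np.exp(logp_new - logp_old)                         # r = pi_new / pi_old
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -np.mean(np.minimum(unclipped, clipped))

# With positive advantages, pushing the ratio far above 1 + eps gains
# nothing: the clip caps the objective, which is what keeps updates stable.
logp_old = np.array([-1.0, -1.0])
adv = np.array([1.0, 1.0])
loss_modest = ppo_clip_loss(np.array([-0.9, -0.9]), logp_old, adv)  # ratio ~ 1.105
loss_greedy = ppo_clip_loss(np.array([0.5, 0.5]), logp_old, adv)    # ratio ~ 4.48, clipped
print(loss_modest, loss_greedy)
```

This clipping is the mechanism behind the "stable convergence" the abstract reports: per-update policy shifts are bounded, so a single batch of favorable advantages cannot destabilize the locomotion policy.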