Not an Obstacle for Dog, but a Hazard for Human: A Co-Ego Navigation System for Guide Dog Robots

arXiv cs.RO / 3/23/2026


Key Points

  • The paper introduces Co-Ego, a dual-branch obstacle avoidance system that fuses ground-level robot sensing with the user's elevated egocentric perspective to improve navigation safety for quadruped guide robots.
  • It identifies the viewpoint asymmetry problem: hazards that are effectively invisible to the robot's ground-level sensors, such as bent branches at head height, can threaten the user even when the robot detects no obstacle.
  • The authors evaluated the approach on a quadruped platform in a controlled user study with blindfolded sighted participants, comparing unassisted, single-view, and cross-view fusion conditions.
  • Results show that cross-view fusion reduces the number of collisions and lowers cognitive load, demonstrating the value of viewpoint complementarity for safe navigation.
  • The work positions Co-Ego as the first explicit solution to viewpoint asymmetry in robotic guide-dog navigation, with potential implications for accessibility and safety in BLV mobility.

Abstract

Guide dogs offer independence to Blind and Low-Vision (BLV) individuals, yet their limited availability leaves the vast majority of BLV users without access. Quadruped robotic guide dogs present a promising alternative, but existing systems rely solely on the robot's ground-level sensors for navigation, overlooking a critical class of hazards: obstacles that are transparent to the robot yet dangerous at human body height, such as bent branches. We term this the viewpoint asymmetry problem and present the first system to explicitly address it. Our Co-Ego system adopts a dual-branch obstacle avoidance framework that integrates robot-centric ground sensing with the user's elevated egocentric perspective to ensure comprehensive navigation safety. Deployed on a quadruped robot, the system is evaluated in a controlled user study with blindfolded sighted participants across three conditions: unassisted, single-view, and cross-view fusion. Results demonstrate that cross-view fusion significantly reduces the number of collisions and cognitive load, verifying the necessity of viewpoint complementarity for safe robotic guide dog navigation.
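The core idea of cross-view fusion can be illustrated with a minimal sketch. Note that the paper does not publish its fusion logic; the function name, per-sector confidence representation, and threshold below are all illustrative assumptions, not the authors' implementation. The sketch shows the conservative-union intuition: a heading is blocked if either viewpoint flags it, so a hazard visible only from the user's elevated camera still vetoes the path.

```python
# Hypothetical sketch of dual-branch cross-view obstacle fusion.
# Each branch reports per-sector obstacle confidence in [0, 1] over the
# same heading sectors in front of the robot (e.g. left, center, right).

def fuse_cross_view(robot_conf, ego_conf, threshold=0.5):
    """Conservative union of the two viewpoints: a sector is marked
    blocked if EITHER the ground-level robot branch or the elevated
    egocentric branch exceeds the confidence threshold. This captures
    hazards that are transparent to ground sensors (e.g. a bent branch
    at head height) but visible from the user's perspective."""
    fused = [max(r, e) for r, e in zip(robot_conf, ego_conf)]
    return [c >= threshold for c in fused]

# Example: ground sensors see a clear path, but the egocentric branch
# detects a head-height hazard in the center sector.
robot_conf = [0.1, 0.0, 0.2]   # ground-level branch
ego_conf   = [0.0, 0.9, 0.1]   # elevated egocentric branch
print(fuse_cross_view(robot_conf, ego_conf))  # → [False, True, False]
```

Taking the per-sector maximum (rather than, say, an average) is a deliberately cautious design choice for a safety-critical guide task: averaging could let one confident detection be diluted by the other branch's blind spot.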