AI Navigate

Speak, Segment, Track, Navigate: An Interactive System for Video-Guided Skull-Base Surgery

arXiv cs.CV / 3/18/2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • The paper introduces a speech-guided embodied agent framework for video-guided skull base surgery that responds to surgeon queries.
  • It combines natural language interaction with real-time visual perception on live intraoperative video streams, eliminating the need for external optical trackers.
  • The system starts with interactive segmentation and labeling of the surgical instrument, using the segmented instrument as a spatial anchor to support downstream tasks like anatomical segmentation, registration, tool pose estimation, and real-time overlays.
  • Evaluation shows competitive spatial accuracy compared with a commercial optical tracking system and highlights improved workflow integration and potential for rapid deployment of video-guided surgical systems.
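The workflow in the key points above — a spoken query triggers instrument segmentation, and the resulting mask serves as a spatial anchor for downstream tasks — can be sketched as a simple dispatch loop. This is a minimal illustrative sketch, not the authors' implementation: the keyword matching stands in for a real speech/intent parser, and `segment_instrument`, `overlay_anatomy`, and `AgentState` are hypothetical names.

```python
# Illustrative sketch of a speech-guided agent loop (all names assumed):
# a transcribed surgeon query is matched to a perception task and executed
# on the current frame; the instrument mask is the shared spatial anchor.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class AgentState:
    """Holds the spatial anchor (instrument mask) shared by downstream tasks."""
    instrument_mask: object = None
    log: List[str] = field(default_factory=list)


def segment_instrument(state: AgentState, frame) -> str:
    # Stand-in for interactive instrument segmentation on the video frame.
    state.instrument_mask = f"mask@frame{frame}"
    return "instrument segmented"


def overlay_anatomy(state: AgentState, frame) -> str:
    # Downstream task: requires the instrument anchor to already exist.
    if state.instrument_mask is None:
        return "no anchor: segment the instrument first"
    return f"anatomy overlaid using anchor {state.instrument_mask}"


# Keyword-triggered dispatch; a real system would use a speech/intent model.
HANDLERS: Dict[str, Callable] = {
    "segment": segment_instrument,
    "overlay": overlay_anatomy,
}


def handle_query(state: AgentState, query: str, frame) -> str:
    for keyword, handler in HANDLERS.items():
        if keyword in query.lower():
            result = handler(state, frame)
            state.log.append(result)
            return result
    return "query not understood"


if __name__ == "__main__":
    state = AgentState()
    print(handle_query(state, "Please segment the instrument", frame=1))
    print(handle_query(state, "Overlay the preoperative model", frame=2))
```

The point of the sketch is the ordering constraint the paper describes: overlay (and other downstream tasks) consume the anchor that segmentation produces, so querying them before segmentation must fail gracefully.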

Abstract

We introduce a speech-guided embodied agent framework for video-guided skull base surgery that dynamically executes perception and image-guidance tasks in response to surgeon queries. The proposed system integrates natural language interaction with real-time visual perception directly on live intraoperative video streams, thereby enabling surgeons to request computational assistance without disengaging from operative tasks. Unlike conventional image-guided navigation systems that rely on external optical trackers and additional hardware setup, the framework operates purely on intraoperative video. The system begins with interactive segmentation and labeling of the surgical instrument. The segmented instrument is then used as a spatial anchor that is autonomously tracked in the video stream to support downstream workflows, including anatomical segmentation, interactive registration of preoperative 3D models, monocular video-based estimation of the surgical tool pose, and supporting image guidance through real-time anatomical overlays. We evaluate the proposed system in video-guided skull base surgery scenarios and benchmark its tracking performance against a commercially available optical tracking system. Results demonstrate that speech-guided embodied agents can achieve competitive spatial accuracy while improving workflow integration and enabling rapid deployment of video-guided surgical systems.
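The final step the abstract mentions — real-time anatomical overlays from a registered preoperative 3D model and a monocularly estimated pose — ultimately reduces to projecting model points into the video frame. Below is a hedged sketch of that projection step under standard pinhole-camera assumptions; the intrinsics, pose, and landmark values are illustrative, not taken from the paper.

```python
# Sketch of the overlay projection step (values illustrative): given an
# estimated pose (R, t) and pinhole intrinsics K, a preoperative 3D landmark
# X is projected to pixel coordinates via x = K (R X + t).

from typing import List, Tuple


def project_point(K: List[List[float]],
                  R: List[List[float]],
                  t: List[float],
                  X: List[float]) -> Tuple[float, float]:
    """Project a world point X into pixel coordinates."""
    # Camera-frame coordinates: Xc = R @ X + t
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # Homogeneous image coordinates: p = K @ Xc
    p = [sum(K[i][j] * Xc[j] for j in range(3)) for i in range(3)]
    # Perspective division gives the pixel where the overlay is drawn.
    return p[0] / p[2], p[1] / p[2]


# Identity rotation, landmark 100 mm in front of the camera, fx = fy = 500 px,
# principal point (320, 240): the landmark projects onto the principal point.
K = [[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 100.0]
u, v = project_point(K, R, t, [0.0, 0.0, 0.0])  # → (320.0, 240.0)
```

In a running system this projection would be recomputed per frame as the tracked instrument anchor updates the pose estimate, which is what makes the overlay follow the live video.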