Where are they looking in the operating room?

arXiv cs.CV / 4/23/2026


Key Points

  • The paper introduces “gaze-following” to the operating room, aiming to infer where surgical staff are looking to improve understanding of attention in high-stakes surgical workflows.
  • It extends existing surgical video datasets by adding gaze-following annotations to 4D-OR and gaze-following plus new team-communication activity labels to Team-OR.
  • The authors propose gaze-based models for three downstream tasks: clinical role prediction and surgical phase recognition using gaze heatmaps, and team communication detection using self-supervised spatial-temporal gaze features.
  • On the 4D-OR and Team-OR benchmarks, the method reaches state-of-the-art results, achieving F1 scores of 0.92 (role prediction) and 0.95 (phase recognition).
  • For team communication detection, the approach surpasses prior baselines by more than 30%, indicating substantial gains in recognizing coordination signals from gaze.
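The gaze-heatmap idea in the points above can be sketched concretely: predicted gaze targets are rendered as Gaussian bumps on a shared spatial grid, and that map becomes the input feature for role or phase classifiers. The sketch below is illustrative only, with assumed grid size, sigma, and normalized-coordinate convention; it is not the paper's implementation.

```python
import numpy as np

def gaze_heatmap(gaze_points, size=(64, 64), sigma=3.0):
    """Render predicted gaze targets (normalized (x, y) in [0, 1]^2) into a
    single-channel heatmap with one Gaussian mode per gaze point.
    Grid size and sigma are illustrative assumptions."""
    h, w = size
    ys, xs = np.mgrid[0:h, 0:w]
    heat = np.zeros(size, dtype=np.float32)
    for gx, gy in gaze_points:
        cx, cy = gx * (w - 1), gy * (h - 1)
        heat += np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
    if heat.max() > 0:
        heat /= heat.max()  # normalize so downstream classifiers see [0, 1]
    return heat

# Example: two staff members attending to the same region of the OR table
# produces one strong shared-attention peak near the grid center.
hm = gaze_heatmap([(0.5, 0.5), (0.52, 0.48)])
```

A classifier (e.g. a small CNN) would then map such heatmaps to clinical roles or surgical phases; overlapping peaks act as a cue for joint attention.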

Abstract

Purpose: Gaze-following, the task of inferring where individuals are looking, has been widely studied in computer vision, advancing research in visual attention modeling, social scene understanding, and human-robot interaction. However, gaze-following has never been explored in the operating room (OR), a complex, high-stakes environment where visual attention plays an important role in surgical workflow analysis. In this work, we introduce the concept of gaze-following to the surgical domain and demonstrate its potential for understanding clinical roles, surgical phases, and team communication in the OR.

Methods: We extend the 4D-OR dataset with gaze-following annotations, and extend the Team-OR dataset with gaze-following annotations and new team-communication activity labels. We then propose novel approaches to clinical role prediction, surgical phase recognition, and team communication detection built on a gaze-following model. For role and phase recognition, we propose a heatmap-based approach that uses only gaze predictions; for team communication detection, we train a spatial-temporal model in a self-supervised way to encode gaze-based clip features, which are then fed into a temporal activity detection model.

Results: Experiments on the 4D-OR and Team-OR datasets demonstrate that our approach achieves state-of-the-art performance on all downstream tasks. Quantitatively, it obtains F1 scores of 0.92 for clinical role prediction and 0.95 for surgical phase recognition. Furthermore, it significantly outperforms existing baselines in team communication detection, improving the previous best performance by over 30%.

Conclusion: We introduce gaze-following in the OR as a novel research direction in surgical data science, highlighting its potential to advance surgical workflow analysis in computer-assisted interventions.
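For the communication-detection pipeline described in the Methods, a minimal stand-in can be sketched as two steps: pool a clip's per-frame gaze heatmaps into one spatio-temporal feature, then score clips against a "team communication" pattern. Everything here is an assumption for illustration (the mean/variance pooling, the cosine prototype matching, the threshold); the paper instead uses a self-supervised encoder followed by a temporal activity detection model.

```python
import numpy as np

def clip_gaze_feature(heatmaps):
    """Pool a clip's per-frame gaze heatmaps (T, H, W) into one L2-normalized
    spatio-temporal feature: the mean map (where attention dwells) plus the
    temporal-variance map (where attention shifts between staff)."""
    stack = np.asarray(heatmaps, dtype=np.float32)
    feat = np.concatenate([stack.mean(axis=0).ravel(),
                           stack.var(axis=0).ravel()])
    n = np.linalg.norm(feat)
    return feat / n if n > 0 else feat

def detect_communication(features, prototype, threshold=0.8):
    """Return indices of clips whose feature has cosine similarity >= threshold
    to a communication prototype. A toy stand-in for a learned temporal
    detector; threshold and matching rule are assumptions."""
    return [i for i, f in enumerate(features)
            if float(f @ prototype) >= threshold]
```

In the actual system the prototype-matching step would be replaced by a trained temporal detector operating on the self-supervised clip features.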