Gaze to Insight: A Scalable AI Approach for Detecting Gaze Behaviours in Face-to-Face Collaborative Learning

arXiv cs.CV / 4/7/2026


Key Points

  • The paper addresses limitations of prior gaze-detection research in collaborative learning, especially the need for large labeled datasets and concerns about cross-configuration robustness in educational settings.
  • It proposes a scalable pipeline that uses foundation/pretrained models—YOLO11 for person tracking, YOLOE-26 with text prompting for education-related object detection, and Gaze-LLE for gaze target prediction.
  • The approach is designed to detect students' gaze behaviours from video without requiring human-annotated training data, aiming to reduce the annotation burden in real deployments.
  • Experiments report an F1-score of 0.829, with strong performance for laptop-directed and peer-directed gaze, while other gaze targets are detected less accurately.
  • Compared with supervised baselines, the method shows superior and more stable performance in complex contexts, suggesting improved robustness when facing varied classroom configurations.
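The F1-score reported above is the harmonic mean of precision and recall. A minimal sketch of the metric, using illustrative confusion counts (the paper does not report raw counts):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall,
    computed from true positives, false positives, and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts only, not the paper's results.
print(round(f1_score(tp=80, fp=20, fn=10), 3))  # 0.842
```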

Abstract

Previous studies have illustrated the potential of analysing gaze behaviours in collaborative learning to provide educationally meaningful information for students to reflect on their learning. Over the past decades, machine learning approaches have been developed to automatically detect gaze behaviours from video data. Yet, since these approaches often require large amounts of labelled data for training, human annotation remains necessary. Additionally, researchers have questioned the cross-configuration robustness of the machine learning models developed, as training datasets often fail to encompass the full range of situations encountered in educational contexts. To address these challenges, this study proposes a scalable artificial intelligence approach that leverages pretrained and foundation models to automatically detect gaze behaviours in face-to-face collaborative learning contexts without requiring human-annotated data. The approach utilises pretrained YOLO11 for person tracking, YOLOE-26 with text-prompt capability for education-related object detection, and the Gaze-LLE model for gaze target prediction. The results indicate that the proposed approach achieves an F1-score of 0.829 in detecting students' gaze behaviours from video data, with strong performance for laptop-directed gaze and peer-directed gaze, yet weaker performance for other gaze targets. Furthermore, compared with other supervised machine learning approaches, the proposed method demonstrates superior and more stable performance in complex contexts, highlighting its better cross-configuration robustness. The implications of this approach for supporting students' collaborative learning in real-world environments are also discussed.
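The final step of the pipeline described in the abstract, turning a predicted gaze point and a set of detected objects and persons into a gaze-behaviour label, can be sketched as follows. The function name, the point-in-box assignment rule, and the label strings are illustrative assumptions, not the paper's implementation; in the actual pipeline the boxes would come from YOLO11/YOLOE-26 and the gaze point from Gaze-LLE:

```python
Box = tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixels

def label_gaze(gaze_point: tuple[float, float],
               detections: list[tuple[str, Box]]) -> str:
    """Assign a gaze-behaviour label by checking which detected
    bounding box contains the predicted gaze point.
    Illustrative rule: first containing box wins; otherwise 'other'."""
    x, y = gaze_point
    for cls, (x1, y1, x2, y2) in detections:
        if x1 <= x <= x2 and y1 <= y <= y2:
            return f"{cls}-directed"
    return "other"

# Hypothetical detections for one video frame.
dets = [("laptop", (100, 200, 300, 350)), ("peer", (400, 50, 520, 300))]
print(label_gaze((150, 250), dets))  # laptop-directed
print(label_gaze((450, 100), dets))  # peer-directed
print(label_gaze((10, 10), dets))    # other
```

Per-frame labels like these can then be aggregated over time to describe each student's gaze behaviour during collaboration.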