Rapidly deploying on-device eye tracking by distilling visual foundation models

arXiv cs.CV / 4/6/2026


Key Points

  • The paper proposes DistillGaze, a framework for rapidly deploying accurate on-device gaze estimation for AR/VR despite changes in hardware and illumination across device generations.
  • It addresses a key limitation of off-the-shelf visual foundation models on specialized near-eye infrared imagery by creating a domain-specialized teacher with self-supervised learning using both labeled synthetic and unlabeled real data.
  • DistillGaze then trains a lightweight on-device student model via teacher guidance plus self-training to close the synthetic-to-real domain gap.
  • On a large crowd-sourced dataset with 2,000+ participants, DistillGaze cuts median gaze error by 58.62% versus synthetic-only baselines while keeping the model small (256K parameters) for real-time deployment.

Abstract

Eye tracking (ET) plays a critical role in augmented and virtual reality applications. However, rapidly deploying high-accuracy, on-device gaze estimation for new products remains challenging because hardware configurations (e.g., camera placement, camera pose, and illumination) often change across device generations. Visual foundation models (VFMs) are a promising direction for rapid training and deployment, and they excel on natural-image benchmarks; yet we find that off-the-shelf VFMs still struggle to achieve high accuracy on specialized near-eye infrared imagery. To address this gap, we introduce DistillGaze, a framework that distills a foundation model by leveraging labeled synthetic data and unlabeled real data for rapid and high-performance on-device gaze estimation. DistillGaze proceeds in two stages. First, we adapt a VFM into a domain-specialized teacher using self-supervised learning on labeled synthetic and unlabeled real images. Synthetic data provides scalable, high-quality gaze supervision, while unlabeled real data helps bridge the synthetic-to-real domain gap. Second, we train an on-device student using both teacher guidance and self-training. Evaluated on a large-scale, crowd-sourced dataset spanning over 2,000 participants, DistillGaze reduces median gaze error by 58.62% relative to synthetic-only baselines while maintaining a lightweight 256K-parameter model suitable for real-time on-device deployment. Overall, DistillGaze provides an efficient pathway for training and deploying ET models that adapt to hardware changes, and offers a recipe for combining synthetic supervision with unlabeled real data in on-device regression tasks.
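The abstract describes a second-stage student objective that combines three signals: supervised gaze loss on labeled synthetic data, teacher guidance on unlabeled real data, and self-training on the student's own pseudo-labels. The paper's exact losses and weights are not given here, so the sketch below is only illustrative: the linear teacher/student models, the MSE surrogate for gaze error, and the weights `lam` and `mu` are all hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher(x):
    # Hypothetical frozen, domain-specialized teacher: maps a flattened
    # near-eye image to a 2-D gaze direction (e.g., yaw, pitch).
    W_t = np.full((x.shape[-1], 2), 0.01)
    return x @ W_t

def gaze_loss(pred, target):
    # Simple MSE surrogate for angular gaze error (illustrative only).
    return float(np.mean((pred - target) ** 2))

# Hypothetical data: labeled synthetic images and unlabeled real images.
x_syn = rng.normal(size=(32, 64))
y_syn = rng.normal(size=(32, 2))
x_real = rng.normal(size=(32, 64))

# Lightweight student: a single linear layer standing in for the
# 256K-parameter on-device model described in the paper.
W_s = np.zeros((64, 2))
student = lambda x: x @ W_s

# Self-training uses a frozen snapshot of the student's own predictions
# on real data as pseudo-labels (detached from further updates).
pseudo = student(x_real).copy()

lam, mu = 1.0, 0.5  # hypothetical loss weights
loss = (gaze_loss(student(x_syn), y_syn)            # synthetic supervision
        + lam * gaze_loss(student(x_real), teacher(x_real))  # teacher guidance
        + mu * gaze_loss(student(x_real), pseudo))  # self-training term
print(loss > 0.0)
```

In a real training loop this combined loss would be minimized by gradient descent over the student's weights, with the pseudo-labels periodically refreshed as the student improves; the teacher stays frozen after its own domain adaptation in stage one.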