EgoMAGIC: An Egocentric Video Field Medicine Dataset for Training Perception Algorithms

arXiv cs.AI / 4/27/2026


Key Points

  • The paper introduces EgoMAGIC, an egocentric medical activity video dataset collected under DARPA’s Perceptually-enabled Task Guidance (PTG) program to support AR-assisted virtual task guidance.
  • EgoMAGIC includes 3,355 videos covering 50 medical tasks, with at least 50 labeled videos per task, and most videos recorded with a head-mounted stereo camera and audio.
  • The authors also release medical training data and a focused action detection challenge covering eight medical tasks, lowering the barrier for research and experimentation.
  • Using this dataset, the team trained 40 YOLO models on 1.95 million labels to detect 124 medical objects and reported baseline action-detection results, with a best average mAP of 0.526 (see the sketch after this list).
  • Beyond action detection, the dataset is positioned as useful for other computer-vision tasks such as action recognition, object identification/detection, and error detection.
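
If the released detectors follow the standard Ultralytics YOLO checkpoint format (an assumption; the summary does not name the framework version or weight files), loading one and running inference on an egocentric frame might look like the minimal sketch below. The weights filename and image path are hypothetical placeholders.

```python
# Minimal sketch, assuming the released detectors use the standard
# Ultralytics YOLO format; the weights filename below is hypothetical.
from ultralytics import YOLO

# Load one of the 40 released medical-object detectors (hypothetical name).
model = YOLO("egomagic_detector.pt")

# Run detection on a single egocentric video frame (hypothetical path).
results = model("frame_000123.jpg")

# Print each detected object's class, confidence, and bounding box.
for result in results:
    for box in result.boxes:
        cls_name = result.names[int(box.cls)]
        print(f"{cls_name}: {float(box.conf):.2f}, bbox={box.xyxy.tolist()}")
```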

Abstract

This paper introduces EgoMAGIC (Medical Assistance, Guidance, Instruction, and Correction), an egocentric medical activity dataset collected as part of DARPA's Perceptually-enabled Task Guidance (PTG) program. This dataset comprises 3,355 videos of 50 medical tasks, with at least 50 labeled videos per task. The primary objective of the PTG program was to develop virtual assistants integrated into augmented reality headsets to assist users in performing complex tasks. To encourage exploration and research using this dataset, the medical training data has been released along with an action detection challenge focused on eight medical tasks. The majority of the videos were recorded using a head-mounted stereo camera with integrated audio. From this dataset, 40 YOLO models were trained using 1.95 million labels to detect 124 medical objects, providing a robust starting point for developers working on medical AI applications. In addition to introducing the dataset, this paper presents baseline results on action detection for the eight selected medical tasks across three models, with the best-performing method achieving an average mAP of 0.526. Although this paper primarily addresses action detection as the benchmark, the EgoMAGIC dataset is equally suitable for action recognition, object identification and detection, error detection, and other challenging computer vision tasks. The dataset is accessible via zenodo.org (DOI: 10.5281/zenodo.19239154).
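
As a starting point for accessing the release, the sketch below lists the files attached to the Zenodo record via the public Zenodo REST API. It assumes the record ID implied by the DOI (10.5281/zenodo.19239154, i.e. record 19239154); the exact file layout of the release is not described here.

```python
# Minimal sketch, assuming the Zenodo record ID implied by the DOI
# (10.5281/zenodo.19239154 -> record 19239154) and the public REST API.
import requests

record_id = "19239154"
resp = requests.get(f"https://zenodo.org/api/records/{record_id}")
resp.raise_for_status()
record = resp.json()

# Print each attached file's name and direct download URL.
for f in record.get("files", []):
    print(f["key"], f["links"]["self"])
```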