
ConfusionBench: An Expert-Validated Benchmark for Confusion Recognition and Localization in Educational Videos

arXiv cs.CV / 3/19/2026


Key Points

  • ConfusionBench is a new benchmark for educational videos that targets confusion recognition and localization, addressing shortcomings of existing datasets such as noisy labels, coarse temporal annotations, and limited expert validation.
  • The project introduces a multi-stage filtering pipeline combining model-assisted screening, researcher curation, and expert validation to produce higher-quality data.
  • The benchmark includes a balanced confusion recognition dataset and a video localization dataset, plus zero-shot evaluations showing differences between a proprietary model and an open-source model.
  • Results show the proprietary model performs better overall but tends to over-predict transitional segments, while the open-source model is more conservative and can miss detections.
  • A student confusion report visualization is proposed to help educational experts decide interventions and learning plan adaptations, with all datasets publicly available on the project page.
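The multi-stage filtering pipeline above (model-assisted screening, then researcher curation, then expert validation) can be pictured as a chain of progressively stricter filters. The sketch below is purely illustrative: the stage names, thresholds, and callback signatures are assumptions for exposition, not the paper's actual implementation.

```python
def model_screen(clips, score_fn, threshold):
    """Keep only clips whose model-assigned confidence meets the threshold."""
    return [c for c in clips if score_fn(c) >= threshold]

def filter_pipeline(clips, coarse_score, fine_score, curator_ok, expert_ok):
    """Hypothetical four-stage filter: two model-assisted screening passes,
    then researcher curation, then expert validation."""
    clips = model_screen(clips, coarse_score, 0.5)   # stage 1: coarse screening
    clips = model_screen(clips, fine_score, 0.8)     # stage 2: stricter screening
    clips = [c for c in clips if curator_ok(c)]      # stage 3: researcher curation
    clips = [c for c in clips if expert_ok(c)]       # stage 4: expert validation
    return clips
```

Each stage only removes candidates, so later (more expensive, human) stages see a progressively smaller and cleaner pool, which is the main appeal of this kind of funnel design.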

Abstract

Recognizing and localizing student confusion from video is an important yet challenging problem in educational AI. Existing confusion datasets suffer from noisy labels, coarse temporal annotations, and limited expert validation, which hinder reliable fine-grained recognition and temporally grounded analysis. To address these limitations, we propose a practical multi-stage filtering pipeline that integrates two stages of model-assisted screening with researcher curation and expert validation to build a higher-quality benchmark for confusion understanding. Based on this pipeline, we introduce ConfusionBench, a new benchmark for educational videos consisting of a balanced confusion recognition dataset and a video localization dataset. We further provide zero-shot baseline evaluations of a representative open-source model and a proprietary model on clip-level confusion recognition and long-video confusion localization tasks. Experimental results show that the proprietary model performs better overall but tends to over-predict transitional segments, while the open-source model is more conservative and more prone to missed detections. In addition, the proposed student confusion report visualization can support educational experts in making intervention decisions and adapting learning plans accordingly. All datasets and related materials will be made publicly available on our project page.
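The trade-off the abstract describes, one model over-predicting transitional segments while the other misses detections, shows up directly in segment-level precision and recall. The sketch below uses temporal-IoU matching, a common convention for temporal localization benchmarks; it is an assumption here, not necessarily the paper's exact evaluation protocol.

```python
def iou(a, b):
    """Temporal IoU of two (start, end) segments in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def precision_recall(pred, gold, thr=0.5):
    """Greedily match each predicted segment to an unused gold segment
    whose temporal IoU meets the threshold, then report precision/recall."""
    matched, tp = set(), 0
    for p in pred:
        for i, g in enumerate(gold):
            if i not in matched and iou(p, g) >= thr:
                matched.add(i)
                tp += 1
                break
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall
```

Under this metric, an over-predicting model trades precision for recall (extra spurious segments lower precision), while a conservative model does the reverse (few predictions keep precision high but recall drops).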