Scalable and Explainable Learner-Video Interaction Prediction using Multimodal Large Language Models

arXiv cs.AI / 4/7/2026

Key Points

  • The paper proposes a scalable, explainable pipeline to predict learners’ video control behaviors (watching, pausing, skipping, rewinding) as proxies for cognitive load before educational content is deployed.
  • It uses multimodal large language model (MLLM) embeddings of short video segments, then trains a neural classifier to detect temporally fine-grained “interaction peaks.”
  • To enable interpretability, it extracts GPT-5-coded segment features and applies concept activation vectors so that predicted peaks can be mapped to theory-relevant instructional concepts.
  • The evaluation uses a large dataset of 77 million video control events across 66 online courses, showing strong predictive performance, generalization to unseen academic fields, and interpretable learned concepts.
  • The authors argue the approach supports cost-efficient pre-screening of video design quality and enables large-scale empirical testing of multimedia learning theory.
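The core predictive step described above (segment embeddings in, peak/no-peak labels out) can be sketched in miniature. Everything here is a hedged toy stand-in: the paper uses MLLM embeddings of real video segments and a neural classifier, while this sketch draws random vectors and fits a plain logistic regression by gradient descent just to show the shape of the task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for MLLM embeddings of short video segments;
# in the paper these come from a multimodal LLM, here they are random
# 64-d vectors so the sketch is self-contained.
n_segments, dim = 2000, 64
X = rng.normal(size=(n_segments, dim))

# Synthetic "interaction peak" labels: 1 if the segment lies on the
# positive side of a hidden direction (a linearly separable toy task,
# not the paper's real peak-detection labels).
w_true = rng.normal(size=dim)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Minimal logistic-regression classifier trained by batch gradient descent,
# standing in for the paper's neural classifier.
w = np.zeros(dim)
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ w)
    w -= lr * (X.T @ (p - y) / n_segments)

acc = ((sigmoid(X @ w) > 0.5) == (y == 1)).mean()
print(f"train accuracy: {acc:.3f}")
```

On this separable toy data the classifier recovers the hidden direction almost exactly; the real task is harder because peak labels are aggregated from noisy population-level control events.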

Abstract

Learners' use of video controls in educational videos provides implicit signals of cognitive processing and instructional design quality, yet the lack of scalable and explainable predictive models limits instructors' ability to anticipate such behavior before deployment. We propose a scalable, interpretable pipeline for predicting population-level watching, pausing, skipping, and rewinding behavior as proxies for cognitive load from video content alone. Our approach leverages multimodal large language models (MLLMs) to compute embeddings of short video segments and trains a neural classifier to identify temporally fine-grained interaction peaks. Drawing from multimedia learning theory on instructional design for optimal cognitive load, we code features of the video segments using GPT-5 and employ them as a basis for interpreting model predictions via concept activation vectors. We evaluate our pipeline on 77 million video control events from 66 online courses. Our findings demonstrate that classifiers based on MLLM embeddings reliably predict interaction peaks, generalize to unseen academic fields, and encode interpretable, theory-relevant instructional concepts. Overall, our results show the feasibility of cost-efficient, interpretable pre-screening of educational video design and open new opportunities to empirically examine multimedia learning theory at scale.
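The interpretability step, mapping predictions to theory-relevant concepts via concept activation vectors, can also be sketched. This is a deliberately simplified CAV: instead of the linear probe typically trained in CAV-style methods, it uses the normalized difference of class means as the concept direction, and the concept itself (labeled here as a placeholder for the paper's GPT-5-coded instructional features) is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 64

# Hypothetical concept direction in embedding space (placeholder for one
# of the GPT-5-coded instructional features, e.g. on-screen text density).
concept_dir = rng.normal(size=dim)
concept_dir /= np.linalg.norm(concept_dir)

# Embeddings of segments labeled positive for the concept vs. random ones.
pos = rng.normal(size=(200, dim)) + 2.0 * concept_dir
neg = rng.normal(size=(200, dim))

# Simplified concept activation vector: normalized difference of class
# means (a stand-in for fitting a linear probe between the two sets).
cav = pos.mean(axis=0) - neg.mean(axis=0)
cav /= np.linalg.norm(cav)

# Score a new segment's embedding by its cosine alignment with the CAV;
# a predicted interaction peak with high alignment would be attributed
# to this instructional concept.
segment = rng.normal(size=dim) + 2.0 * concept_dir
score = float(segment @ cav / np.linalg.norm(segment))
print(f"concept alignment: {score:.3f}")
```

In the paper's setting the same idea is applied to the classifier's internal representations, so that fine-grained predicted peaks can be explained in terms of multimedia-learning concepts rather than raw embedding coordinates.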