Predicting States of Understanding in Explanatory Interactions Using Cognitive Load-Related Linguistic Cues
arXiv cs.CL · March 23, 2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper investigates how cognitive-load-related linguistic cues—surprisal, syntactic complexity, and listener gaze variation—relate to a listener's moment-by-moment understanding in explanatory dialogue.
- It analyzes the MUNDEX corpus with self-annotated listener states (Understanding, Partial Understanding, Non-Understanding, Misunderstanding) via retrospective video recall.
- A classification study compares two off-the-shelf classifiers with a fine-tuned German BERT-based multimodal classifier, showing that the four understanding states can be predicted and that performance improves when cognitive-load-related cues are combined with textual features.
- The results indicate that each cue contributes differently to predicting the listener's state and that integrating multiple cues yields better performance, suggesting potential for real-time adaptation in educational or conversational systems.
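The prediction setup described above can be sketched in miniature. The snippet below is a hypothetical illustration, not the paper's pipeline: it trains a simple multinomial logistic-regression classifier (the paper fine-tunes a German BERT-based multimodal model) on toy per-utterance feature vectors standing in for surprisal, syntactic complexity, and gaze variation, with the four self-annotated listener states as labels.

```python
# Hypothetical sketch of four-state understanding prediction from
# cognitive-load-related cues. All data here is synthetic; feature names
# and the classifier choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

STATES = [
    "Understanding",
    "Partial Understanding",
    "Non-Understanding",
    "Misunderstanding",
]

rng = np.random.default_rng(0)
# Toy features per utterance: [mean surprisal, syntactic complexity, gaze variation]
X = rng.normal(size=(200, 3))
# Toy labels standing in for the retrospective self-annotations
y = rng.integers(0, 4, size=200)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# The classifier outputs a probability distribution over the four states,
# which a real-time system could threshold to trigger adaptive explanations.
probs = clf.predict_proba(X[:1])
predicted_state = STATES[int(clf.predict(X[:1])[0])]
print(predicted_state, probs.round(3))
```

In practice, combining such cue features with textual (e.g. BERT) representations, as the paper does, would mean concatenating or fusing the two feature sets before classification.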