Facial-Expression-Aware Prompting for Empathetic LLM Tutoring

arXiv cs.AI / April 20, 2026


Key Points

  • The study explores whether adding facial-expression-aware signals to LLM prompts can improve the empathy and effectiveness of tutoring agents beyond what text-only inputs provide.
  • Researchers built a simulated tutoring setup in which a student agent exhibits facial behaviors drawn from a large unlabeled facial-expression video dataset, and compared four tutor variants, including a text-only baseline.
  • Results from 960 multi-turn conversations across three LLM backbones show that Action Unit (AU)-based conditioning consistently boosts empathetic responsiveness, and that selecting a peak-expression frame outperforms using a random facial frame (a sketch of the prompt-level conditioning follows this list).
  • The gains do not appear to reduce pedagogical clarity or responsiveness to textual cues, and AI-human agreement is highest when empathy is grounded in facial expressions.
  • The paper concludes that lightweight, structured facial-expression representations can enhance LLM tutoring with minimal overhead, avoiding end-to-end retraining.
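To make the prompt-level integration concrete, here is a minimal Python sketch of how AU-based textual conditioning might work: estimated AU intensities are mapped to a short natural-language cue that is prepended to the tutor's prompt. The AU-to-description mapping, the 0.5 threshold, and the prompt wording are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of AU-based textual conditioning (hypothetical: the AU
# mapping, threshold, and prompt template are assumptions, not the paper's).

# A few FACS Action Units and rough affective readings of them.
AU_DESCRIPTIONS = {
    "AU4":  "brow lowerer (often signals confusion or concentration)",
    "AU12": "lip corner puller (smiling, engagement)",
    "AU15": "lip corner depressor (frustration or discouragement)",
    "AU43": "eyes closed (fatigue or disengagement)",
}

def describe_aus(au_intensities: dict[str, float], threshold: float = 0.5) -> str:
    """Turn per-AU intensity estimates in [0, 1] into a short textual cue."""
    active = [desc for au, desc in AU_DESCRIPTIONS.items()
              if au_intensities.get(au, 0.0) >= threshold]
    if not active:
        return "The student's facial expression appears neutral."
    return "The student's face currently shows: " + "; ".join(active) + "."

def build_tutor_prompt(student_message: str, au_intensities: dict[str, float]) -> str:
    """Prompt-level integration: a structured facial cue, no retraining."""
    return (
        f"[Facial cue] {describe_aus(au_intensities)}\n"
        f"[Student] {student_message}\n"
        "Respond as an empathetic tutor, acknowledging the student's "
        "apparent affective state where it helps."
    )

# Example: a student whose face shows a strong brow lowerer (AU4).
print(build_tutor_prompt(
    "I still don't get why the derivative is zero here.",
    {"AU4": 0.8, "AU12": 0.1},
))
```

Because the facial signal enters as plain text, the same conditioning can sit in front of any LLM backbone, which is what makes this kind of approach cheap to deploy.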

Abstract

Large language models (LLMs) enable increasingly capable tutoring-style conversational agents, yet effective tutoring requires sensitivity to learners' affective and cognitive states beyond text alone. Facial expressions provide immediate and practical cues of confusion, frustration, or engagement, but remain underexplored in LLM-driven tutoring. We investigate whether facial-expression-aware signals can improve empathetic tutoring responses through prompt-level integration, without end-to-end retraining. We build a scalable simulated tutoring environment in which a student agent exhibits diverse facial behaviors drawn from a large unlabeled facial-expression video dataset, and we compare four tutor variants: a text-only LLM baseline, a multimodal baseline using a random facial frame, and two Action Unit estimation model (AUM)-based methods that either inject textual AU descriptions or select a peak-expression frame for visual grounding. Across 960 multi-turn conversations spanning three tutor backbones (GPT-5.1, Claude Opus 4.5, and Gemini 2.5 Pro), we run targeted pairwise comparisons judged by five human raters and exhaustive comparisons judged by an AI evaluator. AU-based conditioning consistently improves empathetic responsiveness to facial expressions across all tutor backbones, and AUM-guided peak-frame selection outperforms random-frame visual input; textual AU abstraction and peak-frame visual injection show model-dependent advantages. Control analyses show that this improvement does not come at the expense of pedagogical clarity or responsiveness to textual cues. Finally, AI-human agreement is highest on facial-expression-grounded empathy, supporting scalable AI evaluation for this dimension. Overall, our results show that lightweight, structured facial-expression representations can meaningfully enhance empathy in LLM-based tutoring systems with minimal overhead.
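The AUM-guided peak-frame selection described above can likewise be sketched in a few lines: score every frame of the student's video clip by its aggregate AU intensity and pass only the most expressive frame to the multimodal tutor. The summed-intensity scoring rule and the stub estimator below are our assumptions; in the paper, a real AU estimation model (AUM) would stand in for `estimate_au_intensities`.

```python
# Hypothetical sketch of AUM-guided peak-frame selection. The scoring rule
# (sum of per-AU intensities) and the stub estimator are assumptions.
import numpy as np

def estimate_au_intensities(frame: np.ndarray) -> np.ndarray:
    """Stub AU estimator: a real AUM would return per-AU intensities in [0, 1].
    Faked from pixel statistics here so the sketch runs end to end."""
    rng = np.random.default_rng(int(frame.mean() * 1e6) % (2**32))
    return rng.uniform(0.0, 1.0, size=17)  # e.g., 17 common FACS AUs

def select_peak_frame(frames: list[np.ndarray]) -> tuple[int, np.ndarray]:
    """Return (index, frame) of the frame with the highest summed AU activation."""
    scores = [float(estimate_au_intensities(f).sum()) for f in frames]
    peak = int(np.argmax(scores))
    return peak, frames[peak]

# Example: 30 random 64x64 grayscale "frames" standing in for a video clip.
frames = [np.random.rand(64, 64) for _ in range(30)]
idx, peak = select_peak_frame(frames)
print(f"Peak-expression frame index: {idx}")
```

Compared with picking a random frame, this selection step biases the visual input toward moments where the student's affect is actually legible, which is the intuition behind the paper's random-frame versus peak-frame comparison.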