Why Do Vision Language Models Struggle To Recognize Human Emotions?

arXiv cs.CV / 4/17/2026


Key Points

  • The paper investigates why vision-language models (VLMs) often fail to recognize human emotions, noting they may not outperform specialized vision-only facial-expression classifiers.
  • It finds two key vulnerabilities: emotion datasets are long-tailed, and VLM pretraining on web-scale data exacerbates the resulting head-class bias, collapsing rare emotions into common ones.
  • To address the dataset bias, the authors propose alternative sampling strategies designed to avoid over-representing common concepts.
  • It also highlights that emotion understanding depends heavily on temporal dynamics, but VLMs struggle with dense frame sequences due to context-length and memory/token limits—especially problematic for micro-expressions lasting about 0.25–0.5 seconds.
  • The authors propose a multi-stage context enrichment approach that summarizes intermediate frames into natural-language descriptions and feeds this enriched text along with sparse keyframes to better preserve the emotion trajectory.
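The alternative sampling idea in the bullets above can be sketched as inverse class-frequency sampling, which gives every emotion class an equal chance of being drawn regardless of how many examples it has. The paper's exact strategy is not detailed here, so `balanced_sampler` and its weighting scheme are illustrative assumptions:

```python
import random
from collections import Counter

def balanced_sampler(labels, num_samples, seed=0):
    """Draw example indices with probability inversely proportional to
    class frequency, so rare emotion classes are not drowned out by
    head classes. (Sketch only; the paper's strategy may differ.)"""
    rng = random.Random(seed)
    counts = Counter(labels)
    # Weight each example by 1 / (frequency of its class): every class
    # then carries the same total weight, i.e. equal draw probability.
    weights = [1.0 / counts[y] for y in labels]
    return rng.choices(range(len(labels)), weights=weights, k=num_samples)

# Toy long-tailed label set: "happy" dominates, "fear" is rare.
labels = ["happy"] * 90 + ["sad"] * 8 + ["fear"] * 2
idx = balanced_sampler(labels, num_samples=300)
drawn = Counter(labels[i] for i in idx)
# Each class now receives a roughly equal share of the 300 draws,
# instead of "fear" appearing in only ~2% of batches.
```

Naive uniform sampling would surface "fear" about 6 times in 300 draws; the reweighted sampler surfaces it about 100 times, which is the head-class-bias correction the key points describe.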

Abstract

Understanding emotions is a fundamental ability for intelligent systems that interact with humans. Vision-language models (VLMs) have made tremendous progress on many visual tasks in recent years, potentially offering a promising solution for understanding emotions. Surprisingly, however, even the most sophisticated contemporary VLMs struggle to recognize human emotions, often failing to outperform even specialized vision-only classifiers. In this paper, we ask "Why do VLMs struggle to recognize human emotions?" and observe that the inherently continuous task of dynamic facial expression recognition (DFER) exposes two critical VLM vulnerabilities. First, emotion datasets are naturally long-tailed, and the web-scale data used to pre-train VLMs exacerbates this head-class bias, causing them to systematically collapse rare, under-represented emotions into common categories. We propose alternative sampling strategies that avoid favoring common concepts. Second, temporal information is critical for understanding emotions, yet VLMs cannot represent dense frame sequences, as they are limited by context size and the number of tokens that fit in memory, which poses a clear challenge for emotion recognition. We demonstrate that the sparse temporal sampling strategy used in VLMs is inherently misaligned with the fleeting nature of micro-expressions (0.25-0.5 seconds), which are often the most critical affective signal. As a diagnostic probe, we propose a multi-stage context enrichment strategy that exploits the information in "in-between" frames by first converting them into natural-language summaries. This enriched textual context is provided to the VLM alongside sparse keyframes, preventing attentional dilution from excessive visual data while preserving the emotional trajectory.
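The context enrichment strategy in the abstract can be sketched as a two-stage pipeline: keep a sparse keyframe budget as visual input, and summarize the skipped in-between frames into text that travels with the keyframes. The `caption_fn` interface and the uniform keyframe schedule below are assumptions for illustration, not the paper's implementation:

```python
def uniform_keyframes(num_frames, budget):
    """Indices of `budget` frames sampled uniformly, as VLM pipelines
    typically do when a clip exceeds the token budget."""
    step = num_frames / budget
    return [int(i * step) for i in range(budget)]

def enrich_context(frames, budget, caption_fn):
    """Sketch of multi-stage context enrichment: retain sparse keyframes
    for the VLM's visual input, and convert each run of skipped frames
    into a natural-language summary via `caption_fn` (an assumed
    stand-in for any lightweight captioner)."""
    keep = set(uniform_keyframes(len(frames), budget))
    keyframes, summaries, segment = [], [], []
    for i, frame in enumerate(frames):
        if i in keep:
            if segment:  # summarize the in-between run just ended
                summaries.append(caption_fn(segment))
                segment = []
            keyframes.append(frame)
        else:
            segment.append(frame)
    if segment:  # trailing frames after the last keyframe
        summaries.append(caption_fn(segment))
    return keyframes, summaries

# At 30 fps a 0.25-0.5 s micro-expression spans only ~8-15 frames, while
# sampling 8 keyframes from a 10 s clip (300 frames) leaves ~37-frame
# gaps, so the micro-expression likely falls between keyframes. The text
# summaries are what preserve the trajectory across those gaps.
frames = list(range(300))
keys, texts = enrich_context(
    frames, budget=8, caption_fn=lambda seg: f"{len(seg)} frames summarized"
)
```

Every skipped frame is accounted for by exactly one summary, so the emotional trajectory between keyframes reaches the VLM as cheap text tokens rather than expensive visual tokens.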