ActFER: Agentic Facial Expression Recognition via Active Tool-Augmented Visual Reasoning

arXiv cs.CV / 4/13/2026


Key Points

  • The paper introduces ActFER, an agentic multimodal framework for facial expression recognition that shifts from passive label prediction to active visual evidence acquisition and reasoning.
  • ActFER dynamically uses tools for face detection and alignment, zooms into informative local regions, and performs multimodal reasoning over facial Action Units (AUs) and emotions via a visual Chain-of-Thought approach.
  • It proposes UC-GRPO, a reinforcement learning algorithm designed for agentic FER, incorporating AU-grounded verifiable rewards, query-conditional contrastive utility estimation for dynamic credit assignment, and emotion-aware EMA calibration to reduce noisy utility.
  • Experiments report that ActFER trained with UC-GRPO outperforms passive MLLM-based FER baselines and improves AU prediction accuracy, indicating the benefit of learning when and where to inspect.
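The active acquisition loop in the first two bullets can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the tool names (`detect_and_align`, `zoom_region`), the `Step` action schema, and the policy interface are all assumptions made for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Step:
    """One agent decision: which tool to invoke (or 'answer' to stop)."""
    action: str
    arg: str = ""

def agentic_fer(image, policy, tools, max_steps=4):
    """Actively gather visual evidence with tools, then reason to an emotion.

    `policy` stands in for the MLLM agent; `tools` maps tool names to
    callables. Both are hypothetical interfaces for illustration.
    """
    evidence = [("full_image", image)]
    trace = []  # visual chain-of-thought: the sequence of inspection steps
    for _ in range(max_steps):
        step = policy.next_action(evidence, trace)  # e.g. Step("zoom", "mouth")
        if step.action == "detect_align":
            evidence.append(("aligned_face", tools["detect_and_align"](image)))
        elif step.action == "zoom":
            evidence.append((f"crop:{step.arg}", tools["zoom_region"](image, step.arg)))
        elif step.action == "answer":
            break
        trace.append(step)
    # Reason over Action Units first, then map AU evidence to an emotion label.
    aus = policy.predict_aus(evidence)
    return policy.predict_emotion(evidence, aus)
```

The key property is that the set of visual inputs is decided during reasoning (the loop), rather than fixed before a single forward pass as in passive MLLM-based FER.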

Abstract

Recent advances in Multimodal Large Language Models (MLLMs) have created new opportunities for facial expression recognition (FER), moving it beyond pure label prediction toward reasoning-based affect understanding. However, existing MLLM-based FER methods still follow a passive paradigm: they rely on externally prepared facial inputs and perform single-pass reasoning over fixed visual evidence, without the capability for active facial perception. To address this limitation, we propose ActFER, an agentic framework that reformulates FER as active visual evidence acquisition followed by multimodal reasoning. Specifically, ActFER dynamically invokes tools for face detection and alignment, selectively zooms into informative local regions, and reasons over facial Action Units (AUs) and emotions through a visual Chain-of-Thought. To realize such behavior, we further develop Utility-Calibrated GRPO (UC-GRPO), a reinforcement learning algorithm tailored to agentic FER. UC-GRPO uses AU-grounded multi-level verifiable rewards to densify supervision, query-conditional contrastive utility estimation to enable sample-aware dynamic credit assignment for local inspection, and emotion-aware EMA calibration to reduce noisy utility estimates while capturing emotion-wise inspection tendencies. This algorithm enables ActFER to learn both when local inspection is beneficial and how to reason over the acquired evidence. Comprehensive experiments show that ActFER trained with UC-GRPO consistently outperforms passive MLLM-based FER baselines and substantially improves AU prediction accuracy.
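Two of the UC-GRPO ingredients named above — query-conditional contrastive utility estimation and emotion-aware EMA calibration — can be sketched in simplified form. This is a guess at the shape of these estimators based only on the abstract; the function names, the group-rollout setup, and the EMA recurrence are assumptions, not the paper's definitions.

```python
from collections import defaultdict

def contrastive_utility(rewards, inspected):
    """Per-query contrastive utility of local inspection: the reward gap,
    within one group of rollouts for the same query, between rollouts that
    performed local inspection and those that did not (illustrative)."""
    with_inspect = [r for r, z in zip(rewards, inspected) if z]
    without = [r for r, z in zip(rewards, inspected) if not z]
    if not with_inspect or not without:
        return 0.0  # gap is undefined when every rollout made the same choice
    return sum(with_inspect) / len(with_inspect) - sum(without) / len(without)

class EmotionEMACalibrator:
    """Smooth noisy per-query utilities with one EMA per emotion class,
    so that emotion-wise inspection tendencies accumulate over training."""
    def __init__(self, beta=0.9):
        self.beta = beta
        self.ema = defaultdict(float)
        self.seen = set()

    def update(self, emotion, utility):
        if emotion not in self.seen:
            self.ema[emotion] = utility  # initialize on first observation
            self.seen.add(emotion)
        else:
            self.ema[emotion] = (self.beta * self.ema[emotion]
                                 + (1.0 - self.beta) * utility)
        return self.ema[emotion]
```

Under this reading, the calibrated utility would modulate credit assigned to inspection actions on top of GRPO's usual group-normalized advantages, letting the policy learn for which queries (and which emotions) zooming in actually pays off.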