AI Navigate

Distance-aware Soft Prompt Learning for Multimodal Valence-Arousal Estimation

arXiv cs.CV / 3/17/2026

📰 News · Models & Research

Key Points

  • The paper introduces Distance-aware Soft Prompt Learning to bridge semantic space and continuous valence-arousal dimensions for multimodal estimation.
  • It partitions the VA space into a 3x3 grid of nine emotional regions and uses a Gaussian kernel to assign soft labels based on distance to region centers, enabling fine-grained emotional transitions rather than hard categories.
  • The architecture combines a CLIP image encoder with an Audio Spectrogram Transformer (AST) for multimodal features, uses GRUs for temporal modeling, and employs hierarchical fusion with cross-modal attention and gated refinement.
  • On the Aff-Wild2 dataset, the approach achieves competitive accuracy in unconstrained in-the-wild scenarios, demonstrating the effectiveness of the semantic-guided method.
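The distance-aware soft labeling described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the grid centers over a [-1, 1] VA range and the kernel bandwidth `sigma` are assumptions not specified in this summary.

```python
import numpy as np

# Nine region centers from a 3x3 partition of an assumed [-1, 1]^2 VA space.
CENTERS = np.array([(v, a) for a in (-2/3, 0.0, 2/3)
                           for v in (-2/3, 0.0, 2/3)])

def soft_labels(valence, arousal, sigma=0.5):
    """Gaussian-kernel soft assignment of a VA point to the nine regions.

    sigma is an assumed bandwidth; the paper's value is not given here.
    """
    # Squared Euclidean distance from the ground-truth point to each center.
    d2 = np.sum((CENTERS - np.array([valence, arousal])) ** 2, axis=1)
    # Gaussian kernel on distance, then normalize to a probability distribution.
    w = np.exp(-d2 / (2 * sigma ** 2))
    return w / w.sum()
```

A point near a region center concentrates mass on that region, while a point between centers spreads mass across its neighbors, which is what lets the model learn gradual emotional transitions instead of hard category boundaries.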

Abstract

Valence-arousal (VA) estimation is crucial for capturing the nuanced nature of human emotions in naturalistic environments. While pre-trained Vision-Language models like CLIP have shown remarkable semantic alignment capabilities, their application in continuous regression tasks is often limited by the discrete nature of text prompts. In this paper, we propose a novel multimodal framework for VA estimation that introduces Distance-aware Soft Prompt Learning to bridge the gap between semantic space and continuous dimensions. Specifically, we partition the VA space into a 3x3 grid, defining nine emotional regions, each associated with distinct textual descriptions. Rather than a hard categorization, we employ a Gaussian kernel to compute soft labels based on the Euclidean distance between the ground truth coordinates and the region centers, allowing the model to learn fine-grained emotional transitions. For multimodal integration, our architecture utilizes a CLIP image encoder and an Audio Spectrogram Transformer (AST) to extract robust spatial and acoustic features. These features are temporally modeled via Gated Recurrent Units (GRUs) and integrated through a hierarchical fusion scheme that sequentially combines cross-modal attention for alignment and gated fusion for adaptive refinement. Experimental results on the Aff-Wild2 dataset demonstrate that our proposed semantic-guided approach significantly enhances the accuracy of VA estimation, achieving competitive performance in unconstrained "in-the-wild" scenarios.
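The two-stage fusion in the abstract (cross-modal attention for alignment, then a learned gate for adaptive refinement) can be sketched in miniature. This is a minimal numpy illustration under assumed shapes and randomly initialized weights, not the paper's architecture; real CLIP/AST features and GRU outputs would replace the toy inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(q_feats, kv_feats, Wq, Wk, Wv):
    """Stage 1: one modality's features attend to the other's for alignment."""
    Q, K, V = q_feats @ Wq, kv_feats @ Wk, kv_feats @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # scaled dot-product attention
    return softmax(scores, axis=-1) @ V

def gated_fusion(x, y, Wg, bg):
    """Stage 2: a sigmoid gate adaptively mixes the two streams per feature."""
    g = 1.0 / (1.0 + np.exp(-(np.concatenate([x, y], axis=-1) @ Wg + bg)))
    return g * x + (1.0 - g) * y

T, d = 5, 8  # toy sequence length and feature dim (assumptions)
vis = rng.standard_normal((T, d))  # stand-in for GRU-modeled visual features
aud = rng.standard_normal((T, d))  # stand-in for GRU-modeled audio features
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
aligned = cross_modal_attention(vis, aud, Wq, Wk, Wv)  # alignment
Wg, bg = rng.standard_normal((2 * d, d)), np.zeros(d)
fused = gated_fusion(vis, aligned, Wg, bg)             # adaptive refinement
```

The sequential ordering matters: attention first brings the audio stream into register with the visual timeline, and only then does the gate decide, per time step and feature, how much of each aligned stream to keep.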