Solution for 10th Competition on Ambivalence/Hesitancy (AH) Video Recognition Challenge using Divergence-Based Multimodal Fusion

arXiv cs.CV / 3/19/2026

Key Points

  • The paper proposes a divergence-based multimodal fusion to address Ambivalence/Hesitancy video recognition in the ABAW CVPR 2026 competition by explicitly modeling cross-modal conflict among visual, audio, and text streams.
  • Visual features are encoded as Action Units (AUs) via Py-Feat, audio via Wav2Vec 2.0, and text via BERT, with each modality processed by a BiLSTM and attention pooling to produce a shared embedding.
  • The fusion module uses pairwise absolute differences between modality embeddings to capture cross-modal incongruence that characterizes A/H.
  • On the BAH dataset, the method achieves a Macro F1 of 0.6808 on the validation set, outperforming the baseline of 0.2827.
  • Statistical analysis across 1,132 videos identifies temporal variability of AUs as the dominant visual discriminator of Ambivalence/Hesitancy.
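The architecture described in the key points can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' code: the hidden and embedding sizes, class count, and module names are assumptions, and the three input dimensionalities stand in for the AU, Wav2Vec 2.0, and BERT feature sizes.

```python
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """BiLSTM over a feature sequence, attention pooling over time,
    then projection into a shared embedding space (illustrative sketch)."""
    def __init__(self, in_dim, hidden=128, embed=64):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)       # scalar attention score per timestep
        self.proj = nn.Linear(2 * hidden, embed)   # map into the shared space

    def forward(self, x):                          # x: (batch, time, in_dim)
        h, _ = self.lstm(x)                        # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)     # attention weights over timesteps
        pooled = (w * h).sum(dim=1)                # weighted sum: (batch, 2*hidden)
        return self.proj(pooled)                   # (batch, embed)

class DivergenceFusion(nn.Module):
    """Fuses the three modality embeddings via pairwise absolute
    differences, so the classifier sees cross-modal incongruence."""
    def __init__(self, embed=64, n_classes=2):
        super().__init__()
        self.head = nn.Linear(3 * embed, n_classes)  # 3 pairs: v-a, v-t, a-t

    def forward(self, v, a, t):
        div = torch.cat([(v - a).abs(), (v - t).abs(), (a - t).abs()], dim=-1)
        return self.head(div)

# Example: per-frame AU vectors (~20 dims via Py-Feat), Wav2Vec 2.0
# frames (768 dims), BERT token embeddings (768 dims) -- sizes assumed.
enc_v, enc_a, enc_t = ModalityEncoder(20), ModalityEncoder(768), ModalityEncoder(768)
fusion = DivergenceFusion()
logits = fusion(enc_v(torch.randn(4, 30, 20)),
                enc_a(torch.randn(4, 50, 768)),
                enc_t(torch.randn(4, 12, 768)))   # logits: (4, 2)
```

When the three streams agree, the pairwise differences shrink toward zero; conflicting streams yield large divergence features, which is the signal the paper associates with ambivalence/hesitancy.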

Abstract

We address the Ambivalence/Hesitancy (A/H) Video Recognition Challenge at the 10th ABAW Competition (CVPR 2026). We propose a divergence-based multimodal fusion that explicitly measures cross-modal conflict between visual, audio, and textual channels. Visual features are encoded as Action Units (AUs) extracted via Py-Feat, audio via Wav2Vec 2.0, and text via BERT. Each modality is processed by a BiLSTM with attention pooling and projected into a shared embedding space. The fusion module computes pairwise absolute differences between modality embeddings, directly capturing the incongruence that characterizes A/H. On the BAH dataset, our approach achieves a Macro F1 of 0.6808 on the validation set, outperforming the challenge baseline of 0.2827. Statistical analysis across 1,132 videos confirms that temporal variability of AUs is the dominant visual discriminator of A/H.
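Macro F1, the challenge metric quoted above, is the unweighted mean of per-class F1 scores, so the minority A/H class counts as much as the majority class. A minimal reference implementation (standard definition, not code from the paper):

```python
def macro_f1(y_true, y_pred, labels):
    """Unweighted mean of per-class F1 scores."""
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Tiny example: two classes, one false positive for class 1.
score = macro_f1([0, 0, 1, 1], [0, 1, 1, 1], labels=[0, 1])
```

On heavily imbalanced data a trivial majority-class predictor can score high accuracy yet low Macro F1, which is why the jump from the 0.2827 baseline to 0.6808 is meaningful.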