
Nuanced Emotion Recognition Based on a Segment-based MLLM Framework Leveraging Qwen3-Omni for AH Detection

arXiv cs.CV / 3/17/2026


Key Points

  • This paper proposes a segment-based framework that combines temporal segmentation of videos (up to 5 seconds per clip) with Multimodal Large Language Models to improve detection of nuanced emotions like Ambivalence and Hesitancy.
  • The method leverages Qwen3-Omni-30B-A3B, fine-tuned on the BAH dataset with LoRA and full-parameter updates via MS-Swift, enabling integrated analysis of visual, audio, and textual cues.
  • Experiments report 85.1% accuracy on the test set and show significant improvements over existing benchmarks, highlighting the ability of multimodal LLMs to capture cross-modal emotional conflicts.
  • The work provides an open-source release (GitHub) and points to applications in affective computing and digital health.
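The segment-based strategy above can be sketched as a simple partitioning of a video's timeline into clips of at most 5 seconds. This is a minimal illustration under my own assumptions, not the paper's implementation; the function and parameter names are hypothetical:

```python
def segment_bounds(duration_s: float, max_len_s: float = 5.0) -> list[tuple[float, float]]:
    """Partition [0, duration_s] into consecutive clips of at most max_len_s seconds."""
    bounds = []
    start = 0.0
    while start < duration_s:
        end = min(start + max_len_s, duration_s)
        bounds.append((start, end))
        start = end
    return bounds

# A 12-second video yields two full 5 s clips plus a 2 s remainder.
print(segment_bounds(12.0))  # [(0.0, 5.0), (5.0, 10.0), (10.0, 12.0)]
```

Each resulting clip can then be passed to the multimodal model independently, which keeps every request within the model's token budget regardless of total video length.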

Abstract

Emotion recognition in videos is a pivotal task in affective computing, where identifying subtle psychological states such as Ambivalence and Hesitancy holds significant value for behavioral intervention and digital health. These states often manifest as cross-modal inconsistencies, such as discrepancies between facial expressions, vocal tone, and textual semantics, which pose a substantial challenge for automated recognition. This paper proposes a recognition framework that integrates temporal segment modeling with Multimodal Large Language Models. To address computational efficiency and token constraints in long-video processing, we employ a segment-based strategy that partitions videos into short clips with a maximum duration of 5 seconds. We leverage the Qwen3-Omni-30B-A3B model, fine-tuned on the BAH dataset using LoRA and full-parameter strategies via the MS-Swift framework, enabling it to jointly analyze visual and auditory signals. Experimental results show that the proposed method achieves 85.1% accuracy on the test set, significantly outperforming existing benchmarks and validating the capability of Multimodal Large Language Models to capture complex and nuanced emotional conflicts. The code is released at https://github.com/dlnn123/A-H-Detection-with-Qwen-Omni.git.
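Since the framework classifies each short clip separately, the per-clip predictions must be combined into one video-level label. The abstract does not state the aggregation rule, so the majority vote below is an assumed example, and `clip_labels` stands in for the outputs of the fine-tuned Qwen3-Omni model:

```python
from collections import Counter

def aggregate_labels(clip_labels: list[str]) -> str:
    """Combine per-clip predictions into a single video-level label by
    majority vote (an assumed scheme; the paper does not specify its own)."""
    return Counter(clip_labels).most_common(1)[0][0]

# Example: three clips classified independently by the segment-level model.
labels = ["Hesitancy", "Ambivalence", "Hesitancy"]
print(aggregate_labels(labels))  # Hesitancy
```

A confidence-weighted average over per-clip scores would be a natural alternative when the model exposes class probabilities rather than hard labels.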