AI Navigate

Complementarity-Supervised Spectral-Band Routing for Multimodal Emotion Recognition

arXiv cs.CV / 3/17/2026

Opinion · Models & Research

Key Points

  • The paper argues that prior multimodal emotion recognition methods rely on independent unimodal performance and use coarse-grained fusion, hindering cross-modal synergy.
  • It proposes Atsuko, the Complementarity-Supervised Multi-Band Expert Network, which decomposes each modality into high-, mid-, and low-frequency components for fine-grained feature modeling.
  • Atsuko introduces a modality-level router with a dual-path mechanism to enable fine-grained cross-band selection and cross-modal fusion.
  • The Marginal Complementarity Module quantifies the performance loss from removing each modality via bi-modal comparison and provides soft supervision to guide the router toward unique information gains.
  • Experiments on the CMU-MOSI, CMU-MOSEI, CH-SIMS, CH-SIMSv2, and MIntRec benchmarks show that Atsuko outperforms prior methods.
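The orthogonal band decomposition in the second key point can be illustrated with a minimal sketch. The paper's exact decomposition is not specified here, so this assumes a simple FFT-based split of each modality's temporal feature sequence into three disjoint frequency bands; the band cutoffs and function names are hypothetical. Because the bands are disjoint in frequency, the three components are mutually orthogonal and sum back to the original features:

```python
import numpy as np

def band_decompose(x, cuts=(0.15, 0.5)):
    """Split a (T, d) feature sequence into low-, mid-, and high-frequency
    components along the time axis via the real FFT. The bands occupy
    disjoint frequency bins, so the parts are orthogonal and sum to x.
    Cutoff fractions are illustrative, not from the paper."""
    T = x.shape[0]
    F = np.fft.rfft(x, axis=0)                  # (T//2 + 1, d) spectrum
    n = F.shape[0]
    lo_end, mid_end = int(n * cuts[0]), int(n * cuts[1])
    parts = []
    for a, b in [(0, lo_end), (lo_end, mid_end), (mid_end, n)]:
        Fb = np.zeros_like(F)
        Fb[a:b] = F[a:b]                        # keep one band, zero the rest
        parts.append(np.fft.irfft(Fb, n=T, axis=0))
    return parts                                 # [low, mid, high]

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 16))               # e.g. one modality's features
low, mid, high = band_decompose(x)
print(np.allclose(low + mid + high, x))          # True: exact reconstruction
print(abs(np.sum(low * high)) < 1e-8)            # True: bands are orthogonal
```

In the paper's framing, each of these band components would then be handled by its own expert before the router performs cross-band selection.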

Abstract

Multimodal emotion recognition fuses cues such as text, video, and audio to understand an individual's emotional state. Prior methods face two main limitations: they rely mechanically on independent unimodal performance, missing genuine complementary contributions, and their coarse-grained fusion conflicts with the fine-grained representations that emotion tasks require. Because inconsistent information density across heterogeneous modalities hinders inter-modal feature mining, we propose the Complementarity-Supervised Multi-Band Expert Network, named Atsuko, which models fine-grained complementary features via multi-scale band decomposition and expert collaboration. Specifically, we orthogonally decompose each modality's features into high-, mid-, and low-frequency components. Building on this band-level routing, we design a modality-level router with a dual-path mechanism for fine-grained cross-band selection and cross-modal fusion. To mitigate shortcut learning from dominant modalities, we propose the Marginal Complementarity Module (MCM), which quantifies the performance loss incurred when each modality is removed, via bi-modal comparison. The resulting complementarity distribution provides soft supervision, guiding the router toward modalities that contribute unique information gains. Extensive experiments show that our method achieves superior performance on the CMU-MOSI, CMU-MOSEI, CH-SIMS, CH-SIMSv2, and MIntRec benchmarks.
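The MCM idea described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: it assumes the marginal complementarity of a modality is the loss increase when that modality is dropped (the bi-modal ablation), that a softmax over these increases yields the soft-supervision distribution, and that a KL term aligns the router's modality weights with it. All losses and names below are hypothetical:

```python
import numpy as np

def complementarity_targets(full_loss, ablated_losses, tau=1.0):
    """Turn leave-one-modality-out losses into a soft target distribution:
    modalities whose removal hurts most (largest loss increase) receive
    the highest complementarity weight. tau is a temperature (assumed)."""
    gains = np.array([l - full_loss for l in ablated_losses])  # marginal loss increase
    gains = np.maximum(gains, 0.0)       # a modality that helps when removed gets no credit
    e = np.exp(gains / tau)
    return e / e.sum()                   # softmax over modalities

# hypothetical losses: full tri-modal model vs. each bi-modal ablation
full = 0.40
ablated = [0.55, 0.62, 0.43]             # drop text / audio / video, respectively
p = complementarity_targets(full, ablated)
print(p.round(3))                        # removing audio hurts most -> largest weight

def kl(p, q, eps=1e-9):
    """KL divergence used as the soft-supervision term on router weights."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

router = np.array([0.5, 0.3, 0.2])       # example router distribution over modalities
print(kl(p, router) >= 0.0)              # True: KL is non-negative
```

The point of the soft target, per the abstract, is to keep the router from shortcut learning on a dominant modality: supervision rewards modalities whose removal measurably degrades performance, i.e. those carrying unique information.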