Complementarity-Supervised Spectral-Band Routing for Multimodal Emotion Recognition
arXiv cs.CV / 3/17/2026
💬 Opinion · Models & Research
Key Points
- The paper argues that prior multimodal emotion recognition methods rely on independently optimized unimodal representations and coarse-grained fusion, which hinders cross-modal synergy.
- It proposes Atsuko, the Complementarity-Supervised Multi-Band Expert Network, which decomposes each modality into high-, mid-, and low-frequency components for fine-grained feature modeling.
- Atsuko introduces a modality-level router with a dual-path mechanism to enable fine-grained cross-band selection and cross-modal fusion.
- The Marginal Complementarity Module quantifies the performance loss incurred when each modality is removed, using bi-modal comparison, and provides soft supervision that guides the router toward modality-unique information gains.
- Experiments on CMU-MOSI, CMU-MOSEI, CH-SIMS, CH-SIMSv2, and MIntRec demonstrate superior performance, validating the effectiveness of the approach.
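
The two core ideas above can be illustrated with a minimal sketch. This is not the paper's implementation; the band-split fractions, the FFT-masking approach, and the function names (`split_bands`, `marginal_complementarity`) are illustrative assumptions, showing only (1) decomposing a feature sequence into low-, mid-, and high-frequency components, and (2) scoring a modality by the performance drop its removal causes.

```python
# Illustrative sketch only -- not the paper's code. Band boundaries (25% / 50% / 25%
# of the spectrum) and function names are assumptions made for this example.
import numpy as np

def split_bands(x, low_frac=0.25, high_frac=0.25):
    """Split a 1-D feature sequence into low/mid/high-frequency components
    by masking disjoint regions of its real FFT spectrum."""
    n = len(x)
    spec = np.fft.rfft(x)
    k = len(spec)
    low_cut = int(k * low_frac)            # end of the low band
    high_cut = int(k * (1.0 - high_frac))  # start of the high band
    low = np.zeros_like(spec)
    mid = np.zeros_like(spec)
    high = np.zeros_like(spec)
    low[:low_cut] = spec[:low_cut]
    mid[low_cut:high_cut] = spec[low_cut:high_cut]
    high[high_cut:] = spec[high_cut:]
    # Because the masks partition the spectrum, the three components sum
    # back to the original sequence (up to floating-point error).
    return (np.fft.irfft(low, n), np.fft.irfft(mid, n), np.fft.irfft(high, n))

def marginal_complementarity(full_score, ablated_score):
    """Marginal contribution of one modality: the performance drop when it
    is removed from the full model, clipped at zero."""
    return max(0.0, full_score - ablated_score)
```

Usage: each unimodal feature stream would be passed through `split_bands` before routing, and `marginal_complementarity(full_acc, acc_without_modality)` would supply the soft supervision signal for the router.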