AMB-DSGDN: Adaptive Modality-Balanced Dynamic Semantic Graph Differential Network for Multimodal Emotion Recognition
arXiv cs.AI / 3/12/2026
Key Points
- The paper proposes Adaptive Modality-Balanced Dynamic Semantic Graph Differential Network (AMB-DSGDN) for multimodal dialogue emotion recognition using text, speech, and vision modalities.
- It builds modality-specific subgraphs with intra-speaker and inter-speaker connections to capture self-continuity and cross-speaker emotional dependencies.
- It introduces a differential graph attention mechanism that contrasts two attention maps to cancel shared noise while preserving modality-specific and context-relevant signals.
- It includes an adaptive modality balancing mechanism that estimates a dropout probability for each modality from its relative contribution to emotion modeling, preventing any single modality from dominating the fused representation.
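The intra-/inter-speaker graph construction can be illustrated with a minimal sketch. Here `dialogue_edges` is a hypothetical helper (not from the paper): each utterance links to its recent predecessors within a context window, and edges are labeled "intra" when both turns come from the same speaker and "inter" otherwise; the paper builds one such subgraph per modality.

```python
def dialogue_edges(speakers, window=2):
    """Link each utterance i to the previous `window` utterances.
    Edge kind is "intra" (same speaker, self-continuity) or
    "inter" (different speakers, cross-speaker dependency).
    Illustrative sketch; the paper's exact construction may differ."""
    edges = []
    for i in range(len(speakers)):
        for j in range(max(0, i - window), i):
            kind = "intra" if speakers[i] == speakers[j] else "inter"
            edges.append((j, i, kind))
    return edges

# A three-turn dialogue: speaker A, then B, then A again.
edges = dialogue_edges(["A", "B", "A"], window=2)
```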
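The differential attention idea of contrasting two attention maps can be sketched in scalar form. This is an assumption-laden toy (the paper operates on graph neighborhoods with learned projections): two softmax maps are computed over the same neighbors, and the second, scaled by a coefficient `lam`, is subtracted from the first so that a noise component shared by both maps cancels.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def differential_attention(scores_a, scores_b, lam=0.5):
    """Contrast two attention maps: weights = softmax(a) - lam * softmax(b).
    Noise appearing in both score sets receives similar mass in both
    maps and is suppressed by the subtraction.
    Hypothetical scalar sketch, not the paper's exact formulation."""
    attn_a = softmax(scores_a)
    attn_b = softmax(scores_b)
    return [a - lam * b for a, b in zip(attn_a, attn_b)]

# The last neighbor carries a large score in BOTH maps (shared noise),
# so its final weight is pushed down relative to a plain softmax.
weights = differential_attention([2.0, 0.5, 3.0], [0.1, 0.2, 3.0])
```

Because each softmax sums to 1, the differential weights sum to `1 - lam`, so a renormalization or residual path would typically follow in a full model.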
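The adaptive modality balancing step can likewise be sketched under stated assumptions: given per-modality contribution scores (however the model estimates them), map the dominant modality to the highest dropout probability so that weaker modalities get more gradient signal during training. The names `modality_dropout_probs` and `apply_modality_dropout` and the linear mapping are illustrative, not the paper's estimator.

```python
import random

def modality_dropout_probs(contributions, p_max=0.5):
    """Map relative contributions to dropout probabilities:
    the most dominant modality is dropped with probability p_max,
    the others proportionally less. Illustrative mapping only."""
    total = sum(contributions)
    shares = [c / total for c in contributions]
    top = max(shares)
    return [p_max * s / top for s in shares]

def apply_modality_dropout(features, probs, rng):
    """Zero out an entire modality's feature vector with its
    estimated dropout probability (training-time only)."""
    return [
        [0.0] * len(f) if rng.random() < p else f
        for f, p in zip(features, probs)
    ]

# Text dominates (0.6) vs. speech (0.3) and vision (0.1):
probs = modality_dropout_probs([0.6, 0.3, 0.1])
dropped = apply_modality_dropout(
    [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], probs, random.Random(0)
)
```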