Evaluation Before Generation: A Paradigm for Robust Multimodal Sentiment Analysis with Missing Modalities

arXiv cs.CV / 4/8/2026


Key Points

  • The paper tackles multimodal sentiment analysis performance loss caused by missing modalities in real-world settings, proposing a more rigorous and evaluation-driven approach than prior prompt-learning methods.
  • It introduces a Missing Modality Evaluator that uses pretrained models and pseudo labels at the input stage to determine how important each missing modality is, reducing reliance on low-quality modality imputation.
  • The framework uses modality-invariant prompt disentanglement to separate shared prompts into modality-specific private prompts, aiming to better capture local correlations.
  • It adaptively suppresses interference from missing modalities via dynamic prompt weighting, with weights derived from mutual information over cross-attention outputs.
  • Experiments on CMU-MOSI, CMU-MOSEI, and CH-SIMS show state-of-the-art results with stable behavior across varied missing-modality scenarios, with code released on GitHub.
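To make the dynamic-weighting bullet concrete, here is a minimal NumPy sketch of one plausible reading: each modality's cross-attention output is scored against a fused reference via a Gaussian mutual-information proxy (I = -½·log(1 − ρ²) from the Pearson correlation ρ), and the scores are softmax-normalized so a zero-imputed missing modality is suppressed. The function name and the correlation-based MI estimator are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def dynamic_prompt_weights(cross_attn_outputs, fused, eps=1e-8):
    """Toy mutual-information-based prompt weighting (assumed, simplified).

    cross_attn_outputs: dict modality -> (T, d) cross-attention features
    fused: (T, d) fused reference representation
    Returns a dict of softmax-normalized weights per modality.
    """
    scores = {}
    for name, feat in cross_attn_outputs.items():
        # Pearson correlation of flattened features as a crude dependence proxy
        a = feat.ravel() - feat.mean()
        b = fused.ravel() - fused.mean()
        rho = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
        rho = np.clip(rho, -0.999, 0.999)
        # Gaussian-case mutual information: I = -0.5 * log(1 - rho^2)
        scores[name] = -0.5 * np.log(1.0 - rho**2)
    names = list(scores)
    s = np.array([scores[n] for n in names])
    w = np.exp(s - s.max())          # softmax over modality scores
    return dict(zip(names, w / w.sum()))
```

A zero-imputed modality has no correlation with the fused features, so its MI score collapses to zero and its weight shrinks relative to the observed modalities.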

Abstract

The missing modality problem poses a fundamental challenge in multimodal sentiment analysis, significantly degrading model accuracy and generalization in real-world scenarios. Existing approaches primarily improve robustness through prompt learning and pretrained models. However, two limitations remain. First, the necessity of generating missing modalities lacks rigorous evaluation. Second, the structural dependencies among multimodal prompts and their global coherence are insufficiently explored. To address these issues, a Prompt-based Missing Modality Adaptation framework is proposed. A Missing Modality Evaluator is introduced at the input stage to dynamically assess the importance of missing modalities using pretrained models and pseudo labels, thereby avoiding low-quality data imputation. Building on this, a Modality-invariant Prompt Disentanglement module decomposes shared prompts into modality-specific private prompts to capture intrinsic local correlations and improve representation quality. In addition, a Dynamic Prompt Weighting module computes mutual-information-based weights from cross-attention outputs to adaptively suppress interference from missing modalities. To enhance global consistency, a Multi-level Prompt Dynamic Connection module integrates shared prompts with self-attention outputs through residual connections, leveraging global prompt priors to strengthen key guidance features. Extensive experiments on three public benchmarks, CMU-MOSI, CMU-MOSEI, and CH-SIMS, demonstrate that the proposed framework achieves state-of-the-art performance and stable results under diverse missing-modality settings. The implementation is available at https://github.com/rongfei-chen/ProMMA
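The residual integration of shared prompts with self-attention outputs can be sketched in a few lines of NumPy. This is a deliberately stripped-down single-head attention with no learned projections, and the function names (`prompt_residual_connect`) and the mixing coefficient `alpha` are hypothetical placeholders, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Single-head scaled dot-product self-attention; learned Q/K/V
    # projections are omitted to keep the sketch minimal.
    d = x.shape[-1]
    attn = softmax(x @ x.T / np.sqrt(d))
    return attn @ x

def prompt_residual_connect(x, shared_prompt, alpha=0.5):
    """Residually fuse a shared (global) prompt prior with self-attention output.

    x: (T, d) token features; shared_prompt: (d,) global prompt vector.
    alpha is an assumed mixing coefficient controlling the prior's strength.
    """
    h = self_attention(x)
    # Broadcast the shared prompt over all tokens and add it as a residual,
    # so the global prompt prior reinforces the attended features.
    return x + h + alpha * shared_prompt[None, :]
```

With `alpha = 0` the block reduces to a plain residual self-attention layer; increasing `alpha` injects more of the global prompt prior into every token.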