Label What Matters: Modality-Balanced and Difficulty-Aware Multimodal Active Learning

arXiv cs.CV / 3/27/2026


Key Points

  • The paper introduces RL-MBA, a reinforcement-learning framework for multimodal active learning that accounts for both changing modality value and shifting instance difficulty across training rounds.
  • It formulates sample selection as a Markov Decision Process, using a policy that adapts based on modality contributions, uncertainty, and diversity, with rewards tied to accuracy improvement and modality balance.
  • RL-MBA’s Adaptive Modality Contribution Balancing (AMCB) dynamically reweights modalities using reinforcement feedback rather than assuming fixed importance.
  • Its Evidential Fusion for Difficulty-Aware Policy Adjustment (EFDA) estimates sample difficulty via uncertainty-based evidential fusion to prioritize genuinely informative samples.
  • Experiments on Food101, KineticsSound, and VGGSound show consistent gains over strong baselines, improving both classification accuracy and modality fairness under limited labeling budgets.
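To make the reward structure described above concrete, here is a minimal sketch of a per-round reward that combines an accuracy gain with a modality-balance term. The function name, the negative-spread balance penalty, and the `balance_weight` trade-off are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def selection_reward(acc_before, acc_after, modality_contribs, balance_weight=0.5):
    """Hypothetical AL-round reward: accuracy improvement plus a
    modality-balance term. Names and weighting are illustrative,
    not taken from RL-MBA's actual reward definition."""
    accuracy_gain = acc_after - acc_before
    contribs = np.asarray(modality_contribs, dtype=float)
    # A balanced round draws similar contributions from every modality,
    # so we penalize the spread (standard deviation) of contributions.
    balance_penalty = contribs.std()
    return accuracy_gain - balance_weight * balance_penalty
```

Under this sketch, two rounds with the same accuracy gain are ranked by how evenly the modalities contributed, which mirrors the paper's goal of rewarding both accuracy gains and modality balance.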

Abstract

Multimodal learning integrates complementary information from different modalities such as image, text, and audio to improve model performance, but its success relies on large-scale labeled data, which is costly to obtain. Active learning (AL) mitigates this challenge by selectively annotating informative samples. In multimodal settings, many approaches implicitly assume that modality importance is stable across rounds and keep selection rules fixed at the fusion stage, which leaves them insensitive to the dynamic nature of multimodal learning, where the relative value of modalities and the difficulty of instances shift as training proceeds. To address this issue, we propose RL-MBA, a reinforcement-learning framework for modality-balanced, difficulty-aware multimodal active learning. RL-MBA models sample selection as a Markov Decision Process, where the policy adapts to modality contributions, uncertainty, and diversity, and the reward encourages accuracy gains and modality balance. Two key components drive this adaptability: (1) Adaptive Modality Contribution Balancing (AMCB), which dynamically adjusts modality weights via reinforcement feedback, and (2) Evidential Fusion for Difficulty-Aware Policy Adjustment (EFDA), which estimates sample difficulty via uncertainty-based evidential fusion to prioritize informative samples. Experiments on Food101, KineticsSound, and VGGSound demonstrate that RL-MBA consistently outperforms strong baselines, improving both classification accuracy and modality fairness under limited labeling budgets.
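The difficulty estimation in EFDA builds on evidential (subjective-logic style) uncertainty, where per-class evidence parameterizes a Dirichlet distribution and the uncertainty mass u = K / S (K classes, S the sum of Dirichlet parameters) scores how ambiguous a sample is. The sketch below fuses per-modality evidence by simple summation; the function name and that fusion rule are illustrative assumptions, not the paper's exact EFDA combination rule.

```python
import numpy as np

def evidential_difficulty(evidence_per_modality):
    """Sketch of uncertainty-based difficulty scoring via evidential
    fusion. Inputs are non-negative class-evidence vectors, one row
    per modality, shape (M, K). Summation fusion is an assumption."""
    evidence = np.asarray(evidence_per_modality, dtype=float)  # (M, K)
    fused = evidence.sum(axis=0)             # fuse modalities into one evidence vector
    alpha = fused + 1.0                      # Dirichlet parameters: alpha_k = e_k + 1
    num_classes = alpha.shape[-1]
    return num_classes / alpha.sum()         # uncertainty mass u = K / S
```

A sample with little supporting evidence from any modality gets difficulty near 1 (maximally uncertain), while strong, consistent evidence drives the score toward 0, matching the intuition of prioritizing genuinely informative samples.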