MiMIC: Mitigating Visual Modality Collapse in Universal Multimodal Retrieval While Avoiding Semantic Misalignment

arXiv cs.CV / 4/24/2026


Key Points

  • The paper studies Universal Multimodal Retrieval (UMR), which aligns different modalities (e.g., images and text) into a shared embedding space for cross-modal search.
  • It finds that common early-fusion methods like Marvel can suffer from visual modality collapse—over-relying on text and effectively ignoring visual features.
  • It also shows that late-fusion methods such as UniVL-DR are comparatively robust to this collapse but can experience semantic misalignment, where meaningfully related items end up far apart in the embedding space.
  • To mitigate both problems, the authors propose MiMIC, using a fusion-in-decoder architecture plus training strategies including single-modality mixin and random caption dropout.
  • Experiments on WebQA+ and EVQA+ demonstrate that MiMIC outperforms both early- and late-fusion baselines, especially in settings where images may lack captions in documents or queries.
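One of the training strategies named above, random caption dropout, can be illustrated with a minimal sketch: during training, blank out an item's caption with some probability so the retriever cannot lean on text alone. The field name `caption` and the drop probability are assumptions for illustration, not details from the paper.

```python
import random

def apply_caption_dropout(examples, p_drop=0.5, seed=0):
    """Hypothetical sketch of random caption dropout: with
    probability p_drop, blank a caption so the model must rely
    on the visual features instead of textual cues."""
    rng = random.Random(seed)
    out = []
    for ex in examples:
        ex = dict(ex)  # leave the input batch untouched
        if ex.get("caption") and rng.random() < p_drop:
            ex["caption"] = ""  # force the model to use the image
        out.append(ex)
    return out
```

In a real pipeline this would be applied per batch (with a fresh random state) so each image is seen both with and without its caption across epochs.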

Abstract

Universal Multimodal Retrieval (UMR) aims to map different modalities (e.g., visual and textual) into a shared embedding space for multi-modal retrieval. Existing UMR methods fall broadly into two categories: early-fusion approaches, such as Marvel, which project visual features into the language model (LM) space to integrate them with the text modality, and late-fusion approaches, such as UniVL-DR, which encode visual and textual inputs with separate encoders and obtain fused embeddings through addition. Our pilot study reveals that Marvel exhibits visual modality collapse, characterized by the model's tendency to disregard visual features while depending excessively on textual cues. In contrast, although UniVL-DR is less affected by this issue, it is more susceptible to semantic misalignment, where semantically related content is positioned far apart in the embedding space. To address these challenges, we propose MiMIC, which introduces two key innovations: (1) a fusion-in-decoder architecture for effective multimodal integration, and (2) robust training through single-modality mixin and random caption dropout. Experiments on the WebQA+ and EVQA+ datasets, where images in documents or queries may lack captions, show that MiMIC consistently outperforms both early- and late-fusion baselines.
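The late-fusion scheme the abstract attributes to UniVL-DR, separate encoders whose outputs are combined by element-wise addition, can be sketched with toy vectors standing in for the encoder outputs. This is an illustration of the fusion operation only, not the actual model.

```python
import numpy as np

def late_fusion_embed(img_vec, txt_vec):
    """Late fusion by element-wise addition of separately
    encoded image and text vectors (toy stand-ins for the
    encoders), L2-normalized for cosine retrieval."""
    fused = img_vec + txt_vec
    return fused / np.linalg.norm(fused)

def cosine(a, b):
    """Cosine similarity used to score query-document pairs."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Because the two encoders are trained separately, nothing forces their vector spaces to agree; the sum of an image vector and a text vector describing the same item can still land far from a semantically related fused embedding, which is one intuition for the semantic misalignment the paper describes.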