MiMIC: Mitigating Visual Modality Collapse in Universal Multimodal Retrieval While Avoiding Semantic Misalignment
arXiv cs.CV / 4/24/2026
📰 News · Ideas & Deep Analysis · Models & Research
Key Points
- The paper studies Universal Multimodal Retrieval (UMR), which aligns different modalities (e.g., images and text) into a shared embedding space for cross-modal search (a minimal retrieval sketch follows this list).
- It finds that common early-fusion methods like Marvel can suffer from visual modality collapse—over-relying on text and effectively ignoring visual features.
- It also shows that late-fusion methods such as UniVL-DR are comparatively robust to this collapse but can experience semantic misalignment, where meaningfully related items end up far apart in the embedding space.
- To mitigate both problems, the authors propose MiMIC, which pairs a fusion-in-decoder architecture with training strategies such as single-modality mixin and random caption dropout (see the augmentation sketch after this list).
- Experiments on WebQA+ and EVQA+ demonstrate that MiMIC outperforms both early- and late-fusion baselines, especially in settings where document or query images lack captions.
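To make the UMR setup concrete, here is a minimal sketch of shared-embedding cross-modal retrieval. This is an assumed, generic setup rather than MiMIC's actual code: embeddings, names, and dimensions are placeholders, and a real system would produce the vectors with a trained multimodal encoder.

```python
# Minimal sketch of shared-embedding cross-modal retrieval: queries and
# multimodal documents live in one vector space, and retrieval is
# nearest-neighbor search by cosine similarity.
import numpy as np

def cosine_scores(query_vec: np.ndarray, doc_matrix: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and a matrix of doc vectors."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_matrix / np.linalg.norm(doc_matrix, axis=1, keepdims=True)
    return d @ q

# Stand-in embeddings; a real UMR system would compute these with a trained
# multimodal encoder over text, images, or image+caption documents.
rng = np.random.default_rng(0)
query = rng.normal(size=128)            # embedded text query
docs = rng.normal(size=(1000, 128))     # embedded mixed-modality corpus

scores = cosine_scores(query, docs)
top_k = np.argsort(-scores)[:5]         # indices of the 5 nearest documents
print(top_k, scores[top_k])
```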
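The two training strategies named in the paper might plausibly be implemented as batch-time augmentations. The sketch below is hedged: the function name, the example structure, and the drop/mixin probabilities are all hypothetical, and the paper defines the actual procedure.

```python
# Hedged sketch of random caption dropout and single-modality mixin as
# they *might* look in a data collator; all names and probabilities here
# are hypothetical, not taken from the paper.
import random

def augment_example(example: dict,
                    caption_drop_p: float = 0.3,
                    single_modality_p: float = 0.2) -> dict:
    """Apply random caption dropout and single-modality mixin to one document.

    example: {"image": ..., "caption": str or None}
    """
    ex = dict(example)

    # Random caption dropout: occasionally hide the caption so the model
    # cannot lean on text alone and must use visual features (this targets
    # the visual modality collapse described above).
    if ex.get("caption") is not None and random.random() < caption_drop_p:
        ex["caption"] = None

    # Single-modality mixin: occasionally reduce the example to one modality
    # so the encoder also sees image-only and text-only inputs, matching
    # retrieval settings where captions are missing entirely.
    if random.random() < single_modality_p:
        if random.random() < 0.5:
            ex["caption"] = None          # image-only variant
        elif ex.get("caption") is not None:
            ex["image"] = None            # text-only variant

    return ex

if __name__ == "__main__":
    random.seed(0)
    doc = {"image": "IMAGE_TENSOR", "caption": "a cat on a sofa"}
    print([augment_example(doc) for _ in range(3)])
```

Applied this way, both augmentations push the encoder to produce useful embeddings even when one modality is absent, which is exactly the caption-free regime in which the paper reports MiMIC's largest gains.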