CodeMMR: Bridging Natural Language, Code, and Image for Unified Retrieval

arXiv cs.AI / 4/20/2026

Key Points

  • The paper argues that code search as IR—and especially code retrieval used in RAG—has been mostly text-centric, leaving out important visual/structural elements of real programming artifacts.
  • It introduces MMCoIR, a new benchmark for multimodal code IR spanning five visual domains, eight programming languages, and eleven libraries, along with extensive evaluation to highlight the task’s difficulty.
  • The authors propose CodeMMR, a unified retrieval model that jointly embeds natural language, code, and images into a shared semantic space using instruction-based multimodal alignment.
  • CodeMMR shows strong cross-modality and cross-language generalization, outperforming several baselines by about 10 points on nDCG@10, and improves RAG by increasing generation fidelity and visual grounding on unseen tasks.
  • The work provides datasets via Hugging Face to support further research and development in multimodal retrieval for programming systems.
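At inference time, the unified-embedding idea in the third bullet reduces to nearest-neighbor search in a shared vector space: queries (text or images) and candidates (code) are embedded once, then ranked by similarity. Below is a minimal sketch of that retrieval step, with toy hand-made vectors and file names standing in for the outputs of a real encoder like CodeMMR's (all names and values here are illustrative assumptions, not the paper's actual data):

```python
import math

# Toy stand-ins for learned embeddings: in the real system, an encoder
# maps text, code, or image inputs into one shared vector space.
corpus = {
    "bar_chart.py":   [0.9, 0.1, 0.0],  # plotting code
    "svg_icon.js":    [0.1, 0.8, 0.2],  # SVG-producing snippet
    "uml_class.puml": [0.0, 0.2, 0.9],  # UML diagram source
}
query = [0.8, 0.2, 0.1]  # e.g. the embedding of a chart screenshot

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, corpus, k=2):
    """Return the k corpus items most similar to the query embedding."""
    ranked = sorted(corpus, key=lambda n: cosine(query_vec, corpus[n]),
                    reverse=True)
    return ranked[:k]

print(retrieve(query, corpus, k=1))  # -> ['bar_chart.py']
```

Because all modalities share one space, the same `retrieve` call works whether the query vector came from a natural-language description, a code fragment, or a screenshot.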

Abstract

Code search, framed as information retrieval (IR), underpins modern software engineering and increasingly powers retrieval-augmented generation (RAG), improving code discovery, reuse, and the reliability of LLM-based coding. Yet existing code IR models remain largely text-centric and often overlook the visual and structural aspects inherent in programming artifacts such as web interfaces, data visualizations, SVGs, schematic diagrams, and UML. To bridge this gap, we introduce MMCoIR, the first comprehensive benchmark for evaluating multimodal code IR across five visual domains, eight programming languages, and eleven libraries, and demonstrate the task's difficulty through extensive evaluation. Building on these findings, we propose CodeMMR, a unified retrieval model that jointly embeds natural language, code, and images into a shared semantic space through instruction-based multimodal alignment. CodeMMR achieves strong generalization across modalities and languages, outperforming competitive baselines (e.g., UniIR, GME, VLM2Vec) by an average of 10 points on nDCG@10. Moreover, integrating CodeMMR into RAG enhances code generation fidelity and visual grounding on unseen code generation tasks, underscoring the potential of multimodal retrieval as a core enabler for next-generation intelligent programming systems. Datasets are available on Hugging Face.
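For context on the headline metric: nDCG@10 rewards rankings that place relevant items early, discounting each result logarithmically by rank and normalizing against the best achievable ordering, so "10 points" is roughly a 0.10 absolute gain on a 0-to-1 scale. A minimal sketch of the standard formulation (the relevance lists here are made-up examples, not the paper's results):

```python
import math

def dcg(rels, k=10):
    """Discounted cumulative gain over the top-k relevance scores."""
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))

def ndcg(rels, k=10):
    """nDCG@k: DCG of the ranking divided by the best achievable DCG."""
    ideal = dcg(sorted(rels, reverse=True), k)
    return dcg(rels, k) / ideal if ideal else 0.0

# A relevant item at rank 1 scores 1.0; pushing it to rank 2 discounts it.
print(round(ndcg([1, 0, 0]), 3))  # -> 1.0
print(round(ndcg([0, 1, 0]), 3))  # -> 0.631
```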