CodeMMR: Bridging Natural Language, Code, and Image for Unified Retrieval
arXiv cs.AI / 4/20/2026
Key Points
- The paper argues that code search as an information-retrieval task, and especially the code retrieval used in retrieval-augmented generation (RAG), has been mostly text-centric, leaving out the visual and structural elements of real programming artifacts.
- It introduces MMCoIR, a new benchmark for multimodal code IR spanning five visual domains, eight programming languages, and eleven libraries, along with extensive evaluation to highlight the task’s difficulty.
- The authors propose CodeMMR, a unified retrieval model that jointly embeds natural language, code, and images into a shared semantic space using instruction-based multimodal alignment.
- CodeMMR shows strong cross-modality and cross-language generalization, outperforming several baselines by about 10 points on nDCG@10, and improves RAG by increasing generation fidelity and visual grounding on unseen tasks.
- The work provides datasets via Hugging Face to support further research and development in multimodal retrieval for programming systems.
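The headline result above is a roughly 10-point gain in nDCG@10, the standard ranking metric for retrieval benchmarks. As a refresher (this is the textbook definition, not code from the paper), nDCG@10 discounts each retrieved item's relevance by the log of its rank position and normalizes by the score of an ideal ordering:

```python
import math

def dcg_at_k(relevances, k=10):
    # Discounted cumulative gain: relevance at rank i is divided
    # by log2(i + 1) (ranks are 1-based, so index i gets log2(i + 2)).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(ranked_relevances, k=10):
    # Normalize by the DCG of the ideal (descending-relevance) ordering,
    # so a perfect ranking scores 1.0.
    ideal = sorted(ranked_relevances, reverse=True)
    idcg = dcg_at_k(ideal, k)
    return dcg_at_k(ranked_relevances, k) / idcg if idcg > 0 else 0.0

# Toy run: a retriever that places the only relevant item at rank 3
# earns half the ideal score.
ranked = [0, 0, 1, 0, 0]
print(round(ndcg_at_k(ranked), 3))  # → 0.5
```

A 10-point improvement on this 0-to-1 (often reported as 0-to-100) scale is therefore a substantial shift in how high relevant code and images sit in the ranked list.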