MCoT-MVS: Multi-level Vision Selection by Multi-modal Chain-of-Thought Reasoning for Composed Image Retrieval
arXiv cs.CV / 3/19/2026
Key Points
- The paper introduces MCoT-MVS, a multi-level vision selection framework for Composed Image Retrieval (CIR) that leverages multi-modal chain-of-thought reasoning from a large language model to guide vision-text understanding.
- It uses reasoning cues to generate retained, removed, and target-inferred texts, which in turn guide two reference visual attention modules to extract discriminative patch-level and instance-level semantics from the reference image.
- A weighted hierarchical fusion module then combines these multi-granular visual cues with the modified text and the imagined target description to align the query with target images in a unified embedding space (a schematic code sketch of the selection and fusion steps follows this list).
- The method achieves state-of-the-art results on the CIRR and FashionIQ benchmarks, and the authors publicly release their code and trained models.
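The key points above describe the architecture in prose only. The PyTorch sketch below illustrates one plausible wiring of the two pieces they name: text-guided cross-attention that lets a reasoning text select visual tokens from the reference image, and a weighted fusion that collapses the multi-granular cues into a single query embedding. The module names (`TextGuidedAttention`, `WeightedHierarchicalFusion`), the embedding dimension, and the four-cue setup are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of the pipeline described above; names and shapes are
# assumptions for illustration, not the paper's actual code.

class TextGuidedAttention(nn.Module):
    """Cross-attention where a pooled reasoning text (retained / removed /
    target-inferred) queries patch- or instance-level visual tokens."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, text_emb: torch.Tensor, vis_tokens: torch.Tensor) -> torch.Tensor:
        # text_emb:   (B, 1, D) pooled embedding of one reasoning text
        # vis_tokens: (B, N, D) visual tokens from the reference image
        out, _ = self.attn(query=text_emb, key=vis_tokens, value=vis_tokens)
        return out.squeeze(1)  # (B, D) text-conditioned visual summary

class WeightedHierarchicalFusion(nn.Module):
    """Learns scalar weights to fuse multi-granular cues into one query embedding."""
    def __init__(self, dim: int, num_cues: int = 4):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_cues))  # one weight per cue
        self.proj = nn.Linear(dim, dim)

    def forward(self, cues: list[torch.Tensor]) -> torch.Tensor:
        # cues: list of (B, D) embeddings, e.g. patch-level, instance-level,
        # modified-text, and imagined-target embeddings.
        w = torch.softmax(self.logits, dim=0)            # normalized fusion weights
        fused = sum(wi * c for wi, c in zip(w, cues))    # convex combination of cues
        return F.normalize(self.proj(fused), dim=-1)     # unit-norm query embedding

# Toy usage: build a query for a batch of 2 and score 5 candidate targets.
B, N, D = 2, 49, 512
attn = TextGuidedAttention(D)
fusion = WeightedHierarchicalFusion(D, num_cues=4)

vis_tokens = torch.randn(B, N, D)                    # reference-image patch tokens
retained_txt = torch.randn(B, 1, D)                  # pooled "retained" text embedding
patch_cue = attn(retained_txt, vis_tokens)           # (B, D) selected patch semantics
other_cues = [torch.randn(B, D) for _ in range(3)]   # instance / text / target cues
query = fusion([patch_cue, *other_cues])             # (B, D), unit norm
targets = F.normalize(torch.randn(5, D), dim=-1)     # candidate target embeddings
scores = query @ targets.T                           # cosine similarities, (B, 5)
```

Softmax-normalized scalar weights keep the fusion a convex combination, so the relative contribution of each granularity stays interpretable; the paper's actual fusion may be more elaborate.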