Beyond Chain-of-Thought: Rewrite as a Universal Interface for Generative Multimodal Embeddings

arXiv cs.CV / 4/27/2026


Key Points

  • The paper argues that using chain-of-thought (CoT) in multimodal embedding generation can produce redundant reasoning steps and semantic ambiguity, especially for retrieval use cases.
  • It introduces RIME (Rewrite-driven Multimodal Embedding), a framework that jointly optimizes generation and embeddings via a retrieval-friendly rewrite to make outputs more suitable for downstream retrieval.
  • The work proposes Cross-Mode Alignment (CMA) to connect generative and discriminative embedding spaces, allowing systems to balance efficiency and accuracy through flexible mutual retrieval.
  • It further presents Refine Reinforcement Learning (Refine-RL), which uses discriminative embeddings as stable semantic anchors to guide rewrite optimization.
  • Experiments on datasets including MMEB-V2, MRMR, and UVRB show RIME improves over prior generative embedding models while also substantially shortening the amount of “thinking” produced.
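The article does not spell out how Cross-Mode Alignment connects the two embedding spaces. A common way to bridge two spaces for mutual retrieval is a symmetric InfoNCE-style contrastive loss between paired embeddings; the sketch below illustrates that general idea, not the paper's actual objective (the function name, temperature value, and loss form are assumptions).

```python
import numpy as np

def cross_mode_alignment_loss(gen_emb, disc_emb, temperature=0.07):
    """Hypothetical symmetric contrastive loss pulling each generative
    embedding toward its paired discriminative embedding, and vice versa,
    so either space can be used to retrieve from the other."""
    # L2-normalize both embedding batches, shape (B, D)
    gen = gen_emb / np.linalg.norm(gen_emb, axis=-1, keepdims=True)
    disc = disc_emb / np.linalg.norm(disc_emb, axis=-1, keepdims=True)
    logits = gen @ disc.T / temperature  # (B, B) cosine similarities

    def xent_diag(lg):
        # cross-entropy with the matched pair (diagonal) as the target
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # Symmetric over both retrieval directions (gen->disc and disc->gen)
    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))
```

Under such a loss, perfectly aligned pairs drive the value toward zero, while mismatched pairs are pushed apart, which is what would allow the "flexible mutual retrieval" trade-off the authors describe.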

Abstract

Multimodal Large Language Models (MLLMs) have emerged as a promising foundation for universal multimodal embeddings. Recent studies have shown that reasoning-driven generative multimodal embeddings can outperform discriminative embeddings on several embedding tasks. However, Chain-of-Thought (CoT) reasoning tends to generate redundant thinking steps and to introduce semantic ambiguity into the summarized answers in broader retrieval scenarios. To address this limitation, we propose Rewrite-driven Multimodal Embedding (RIME), a unified framework that jointly optimizes generation and embedding through a retrieval-friendly rewrite. Meanwhile, we present Cross-Mode Alignment (CMA) to bridge the generative and discriminative embedding spaces, enabling flexible mutual retrieval to trade off efficiency and accuracy. Building on this, we also introduce Refine Reinforcement Learning (Refine-RL), which treats discriminative embeddings as stable semantic anchors to guide the rewrite optimization. Extensive experiments on MMEB-V2, MRMR, and UVRB demonstrate that RIME substantially outperforms prior generative embedding models while significantly reducing thinking length.
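The abstract describes Refine-RL only at a high level. One simple way to picture "discriminative embeddings as stable semantic anchors" is a reward that scores a candidate rewrite by its embedding's cosine similarity to the frozen discriminative anchor, optionally penalizing length to discourage redundant thinking; the sketch below is an illustrative assumption, and `anchor_reward` and `length_penalty` are hypothetical names, not the paper's formulation.

```python
import numpy as np

def anchor_reward(rewrite_emb, anchor_emb, num_tokens, length_penalty=0.001):
    """Hypothetical RL reward: semantic closeness of the rewrite's embedding
    to a frozen discriminative anchor embedding, minus a small penalty on
    rewrite length to keep the generated 'thinking' short."""
    cos = float(
        np.dot(rewrite_emb, anchor_emb)
        / (np.linalg.norm(rewrite_emb) * np.linalg.norm(anchor_emb))
    )
    return cos - length_penalty * num_tokens
```

A reward of this shape would explain both reported effects at once: rewrites are pulled toward the stable discriminative semantics (accuracy), while the length term shortens the generated reasoning (efficiency).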