Align then Train: Efficient Retrieval Adapter Learning

arXiv cs.CL / 4/7/2026


Key Points

  • The paper proposes Efficient Retrieval Adapter (ERA) to address a common dense-retrieval mismatch where complex, instruction-like queries require strong reasoning while documents remain simpler and static.
  • ERA avoids expensive fine-tuning of large embedding models by training retrieval adapters in two stages: self-supervised alignment between a large query embedder and a lightweight document embedder, followed by supervised adaptation using limited labeled data.
  • The method bridges both the representation gap (between different embedding models) and the semantic gap (between complex queries and simpler documents) without requiring corpus re-indexing.
  • Experiments on the MAIR benchmark (126 retrieval tasks across 6 domains) show ERA improves retrieval under low-label regimes and can outperform approaches that depend on larger labeled datasets.
  • ERA also combines strong query embedders with weaker document embedders effectively across domains, suggesting practical efficiency gains in retrieval system design.
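The two-stage recipe above can be sketched in a toy numpy version. Everything here is a hypothetical stand-in (random projections instead of real embedders, a plain linear adapter, MSE losses); the paper's actual adapter architecture, objectives, and training details may differ. The point is only the shape of the pipeline: stage 1 fits the adapter on unlabeled text encoded by both embedders, stage 2 refines it with a few labeled query–document pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a "large" query embedder (8 dims) and a "light"
# document embedder (4 dims). Hypothetical; ERA uses real models.
D_Q, D_D, D_RAW = 8, 4, 16
W_large = rng.normal(size=(D_Q, D_RAW))
W_light = rng.normal(size=(D_D, D_RAW))

def embed_large(x):            # x: (n, D_RAW) raw text features
    return x @ W_large.T

def embed_light(x):
    return x @ W_light.T

# Stage 1 (self-supervised alignment): encode the SAME unlabeled texts
# with both embedders, then fit a linear adapter A mapping the large
# embedder's space into the light embedder's space (least squares).
X_unlabeled = rng.normal(size=(200, D_RAW))
Q = embed_large(X_unlabeled)   # (200, 8)
D = embed_light(X_unlabeled)   # (200, 4)
A, *_ = np.linalg.lstsq(Q, D, rcond=None)   # A: (8, 4)

# Stage 2 (supervised adaptation): with a handful of labeled
# (query, relevant document) pairs, take a few gradient steps that
# push the adapted query embedding toward its positive document.
X_pairs = rng.normal(size=(16, D_RAW))
Qp, Dp = embed_large(X_pairs), embed_light(X_pairs)
lr = 1e-3
for _ in range(50):
    diff = Qp @ A - Dp                       # residual to positives
    A -= lr * Qp.T @ diff / len(Qp)          # MSE gradient step

# Alignment quality on the unlabeled set (lower is better).
err = float(np.mean(np.sum((Q @ A - D) ** 2, axis=1)))
print(A.shape, round(err, 4))
```

Note that only the adapter's parameters are trained; both embedders stay frozen, which is what keeps the method cheap relative to fine-tuning the large model.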

Abstract

Dense retrieval systems increasingly need to handle complex queries. In many realistic settings, users express intent through long instructions or task-specific descriptions, while target documents remain relatively simple and static. This asymmetry creates a retrieval mismatch: understanding queries may require strong reasoning and instruction-following, whereas efficient document indexing favors lightweight encoders. Existing retrieval systems often address this mismatch by directly improving the embedding model, but fine-tuning large embedding models to better follow such instructions is computationally expensive, memory-intensive, and operationally burdensome. To address this challenge, we propose Efficient Retrieval Adapter (ERA), a label-efficient framework that trains retrieval adapters in two stages: self-supervised alignment and supervised adaptation. Inspired by the pre-training and supervised fine-tuning stages of LLMs, ERA first aligns the embedding spaces of a large query embedder and a lightweight document embedder, and then uses limited labeled data to adapt the query-side representation, bridging both the representation gap between embedding models and the semantic gap between complex queries and simple documents without re-indexing the corpus. Experiments on the MAIR benchmark, spanning 126 retrieval tasks across 6 domains, show that ERA improves retrieval in low-label settings, outperforms methods that rely on larger amounts of labeled data, and effectively combines stronger query embedders with weaker document embedders across domains.
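Because adaptation happens entirely on the query side, the document index built with the lightweight embedder never changes. A minimal sketch of what inference could look like, assuming a learned linear adapter `A` (here just a random placeholder) and cosine-similarity search over a pre-built index:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions: large query embedder -> 8 dims, light
# document embedder -> 4 dims. A is the trained adapter (8 -> 4);
# a random matrix stands in for it in this sketch.
A = rng.normal(size=(8, 4))

# The corpus is indexed ONCE with the lightweight document embedder
# and L2-normalized for cosine similarity.
doc_index = rng.normal(size=(1000, 4))
doc_index /= np.linalg.norm(doc_index, axis=1, keepdims=True)

def search(query_vec_large, k=5):
    """Map a large-embedder query vector into the document space via
    the adapter, then rank documents by cosine similarity."""
    q = query_vec_large @ A
    q /= np.linalg.norm(q)
    scores = doc_index @ q
    return np.argsort(-scores)[:k]

top = search(rng.normal(size=8))
print(top)
```

Swapping in a better query embedder or a retrained adapter only changes the query path; the document index stays untouched, which is the "no re-indexing" property the abstract highlights.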