AI Navigate

LLM2Vec-Gen: Generative Embeddings from Large Language Models

arXiv cs.CL / 3/12/2026


Key Points

  • LLM2Vec-Gen presents a self-supervised approach to generate embeddings by learning to represent the model's potential response rather than directly encoding the input.
  • It achieves this by adding trainable special tokens to the LLM's vocabulary, appending them to inputs, and optimizing them to encode the LLM's response while keeping the backbone frozen.
  • Training uses the LLM's own completion as guidance along with an unsupervised embedding teacher that provides distillation targets, enabling learning from unlabeled queries.
  • The method attains state-of-the-art self-supervised performance on MTEB (9.3% improvement over the best unsupervised embedding teacher), reduces harmful content retrieval by up to 43.2%, improves reasoning by about 29.3%, and yields interpretable embeddings that can be decoded back into text.

Abstract

LLM-based text embedders typically encode the semantic content of their input. However, embedding tasks require mapping diverse inputs to similar outputs. Typically, this input-output gap is addressed by training embedding models on paired data with contrastive learning. In this work, we propose a novel self-supervised approach, LLM2Vec-Gen, which adopts a different paradigm: rather than encoding the input, we learn to represent the model's potential response. Specifically, we add trainable special tokens to the LLM's vocabulary, append them to the input, and optimize them to represent the LLM's response in a fixed-length sequence. Training is guided by the LLM's own completion for the query, along with an unsupervised embedding teacher that provides distillation targets. This formulation helps bridge the input-output gap and transfers LLM capabilities such as safety alignment and reasoning to embedding tasks. Crucially, the LLM backbone remains frozen and training requires only unlabeled queries. LLM2Vec-Gen achieves state-of-the-art self-supervised performance on the Massive Text Embedding Benchmark (MTEB), improving by 9.3% over the best unsupervised embedding teacher. We also observe up to a 43.2% reduction in harmful content retrieval and a 29.3% improvement in reasoning capabilities for embedding tasks. Finally, the learned embeddings are interpretable and can be decoded into text to reveal their semantic content.
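The training recipe the abstract describes (backbone frozen, only appended special tokens trained, loss pulling the pooled special-token output toward a teacher embedding) can be sketched in miniature. The following NumPy toy is an illustrative assumption, not the paper's architecture: the "backbone" is a fixed random linear map, the distillation objective is plain squared error, and all dimensions are arbitrary. Only the special-token embeddings receive gradient updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions, not from the paper): input dim, output dim,
# and K trainable special tokens appended to the input sequence.
D_IN, D_OUT, K = 16, 8, 4

W = rng.normal(size=(D_OUT, D_IN))          # frozen "backbone" weights, never updated
special = rng.normal(size=(K, D_IN)) * 0.1  # trainable special-token embeddings
teacher_target = rng.normal(size=D_OUT)     # stand-in distillation target from a teacher

def embed(tokens):
    # Fixed-length embedding: mean of backbone outputs over the K
    # special-token positions (a real LLM would also attend to the query;
    # this toy backbone sees only the appended tokens).
    return (tokens @ W.T).mean(axis=0)

lr, losses = 0.01, []
for _ in range(2000):
    e = embed(special)                       # current embedding, shape (D_OUT,)
    err = e - teacher_target
    losses.append(float(err @ err))          # squared-error distillation loss
    # Analytic gradient w.r.t. each special token: (2/K) * W^T (e - t);
    # identical across tokens here because the toy pooling is a plain mean.
    grad = (2.0 / K) * np.outer(np.ones(K), err @ W)
    special -= lr * grad                     # only the special tokens move; W stays frozen
```

Under this setup the loss shrinks steadily toward zero, mirroring the paper's core constraint: the model's capacity is untouched, and all adaptation lives in a handful of new token embeddings.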