Enhancing Lexicon-Based Text Embeddings with Large Language Models

arXiv cs.CL / 3/20/2026

Key Points

  • The paper proposes LENS, a lexicon-based embedding method that clusters LLM token embeddings to consolidate the redundant vocabulary, assigning each embedding dimension to a cluster of semantically similar tokens and yielding compact representations.
  • It investigates bidirectional attention and various pooling strategies to further improve embedding quality.
  • LENS matches or outperforms dense embeddings on the MTEB benchmark and supports efficient dimension pruning without specialized training objectives; combined with dense embeddings, it achieves state-of-the-art results on BEIR, the retrieval subset of MTEB.
  • The findings suggest lexicon-based embeddings with LLMs can complement dense methods, potentially lowering storage needs and enabling scalable retrieval systems.
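The clustering idea behind the first point can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: the random embedding table, the sizes, the plain k-means loop, and the max-pooling step are all placeholders for the real model's vocabulary embeddings and LENS's actual training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, hidden_dim, n_clusters = 1000, 64, 32

# Stand-in for an LLM's input token-embedding table.
token_embeddings = rng.normal(size=(vocab_size, hidden_dim))

# Minimal k-means: assign each token to one of n_clusters groups,
# so semantically similar tokens share a cluster.
centroids = token_embeddings[rng.choice(vocab_size, n_clusters, replace=False)]
for _ in range(10):
    dists = np.linalg.norm(token_embeddings[:, None, :] - centroids[None, :, :], axis=-1)
    assign = dists.argmin(axis=1)
    for c in range(n_clusters):
        members = token_embeddings[assign == c]
        if len(members):
            centroids[c] = members.mean(axis=0)

def lexicon_embedding(token_logits):
    """Collapse per-token weights (vocab_size,) into one weight per
    cluster (n_clusters,) by max-pooling within each cluster."""
    emb = np.full(n_clusters, -np.inf)
    np.maximum.at(emb, assign, token_logits)
    return emb

logits = rng.normal(size=vocab_size)   # hypothetical per-token relevance scores
emb = lexicon_embedding(logits)
print(emb.shape)  # (32,)
```

The payoff is the shape change: a vocabulary-sized lexical vector (here 1000 entries) collapses into a cluster-sized one (32 entries), which is how LENS keeps lexicon-based embeddings as compact as dense ones.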

Abstract

Recent large language models (LLMs) have demonstrated exceptional performance on general-purpose text embedding tasks. While dense embeddings have dominated related research, we introduce the first lexicon-based embeddings (LENS) leveraging LLMs that achieve competitive performance on these tasks. LENS consolidates the vocabulary space through token embedding clustering to handle the issue of token redundancy in LLM vocabularies. To further improve performance, we investigate bidirectional attention and various pooling strategies. Specifically, LENS simplifies lexical matching with redundant vocabularies by assigning each dimension to a specific token cluster, where semantically similar tokens are grouped together. Extensive experiments demonstrate that LENS outperforms dense embeddings on the Massive Text Embedding Benchmark (MTEB), delivering compact representations with dimensionality comparable to dense counterparts. Furthermore, LENS inherently supports efficient embedding dimension pruning without any specialized objectives like Matryoshka Representation Learning. Notably, combining LENS with dense embeddings achieves state-of-the-art performance on the retrieval subset of MTEB (i.e., BEIR).
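The pruning property mentioned above follows from each dimension mapping to an interpretable token cluster: low-weight dimensions can simply be dropped, with no Matryoshka-style training objective. A hedged sketch of that idea, where the random vectors and the top-k rule are illustrative assumptions rather than the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
dim, k = 128, 16

# Hypothetical cluster-level lexicon embeddings for a query and a document.
query = rng.random(dim)
doc = rng.random(dim)

def prune_topk(vec, k):
    """Keep only the k largest-weight dimensions, zeroing the rest."""
    out = np.zeros_like(vec)
    idx = np.argpartition(vec, -k)[-k:]
    out[idx] = vec[idx]
    return out

# Relevance can still be scored with the pruned vectors at a fraction
# of the storage cost.
full_score = float(query @ doc)
pruned_score = float(prune_topk(query, k) @ prune_topk(doc, k))
```

Because pruning is a post-hoc truncation of an already-trained embedding, the storage/quality trade-off can be tuned at deployment time rather than at training time.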