From Tokens to Concepts: Leveraging SAE for SPLADE

arXiv cs.CL / 4/24/2026


Key Points

  • The paper proposes SAE-SPLADE, a Learned Sparse IR approach that replaces SPLADE’s backbone vocabulary with a latent space of semantic concepts learned via Sparse Auto-Encoders (SAE).
  • It studies how well SAE-learned concepts fit the SPLADE framework, examining their compatibility and suitable training approaches.
  • The authors analyze key differences between SAE-SPLADE and traditional SPLADE, whose reliance on the backbone vocabulary suffers from polysemy and synonymy and complicates multilingual and multimodal use.
  • Experiments show SAE-SPLADE achieves retrieval performance comparable to SPLADE on both in-domain and out-of-domain tasks, while also improving efficiency.

Abstract

Learned Sparse IR models, such as SPLADE, offer an excellent efficiency-effectiveness tradeoff. However, they rely on the underlying backbone vocabulary, which might hinder performance (polysemy and synonymy) and pose a challenge for multilingual and multimodal use cases. To address this limitation, we propose to replace the backbone vocabulary with a latent space of semantic concepts learned using Sparse Auto-Encoders (SAE). Throughout this paper, we study the compatibility of these two frameworks, explore training approaches, and analyze the differences between our SAE-SPLADE model and traditional SPLADE models. Our experiments demonstrate that SAE-SPLADE achieves retrieval performance comparable to SPLADE on both in-domain and out-of-domain tasks while offering improved efficiency.
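The core idea, swapping SPLADE's per-token vocabulary logits for SAE concept activations, can be sketched in a few lines. The sketch below is illustrative only: the dimensions, the randomly initialized weights, and the top-k sparsification are assumptions standing in for a trained SAE, not the paper's actual architecture. What it shows is the shared structure that makes the two frameworks compatible: both produce non-negative sparse vectors scored by a dot product over an inverted index.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: dense backbone embedding -> SAE concept space.
d_model, n_concepts = 64, 512

# Randomly initialized SAE encoder weights stand in for trained parameters.
W_enc = rng.normal(0, 0.1, (d_model, n_concepts))
b_enc = np.zeros(n_concepts)

def sae_sparse_rep(h, k=16):
    """Encode a dense vector into a non-negative sparse concept vector.

    ReLU gives non-negativity (mirroring SPLADE's term weights); keeping
    only the top-k activations enforces sparsity, one common SAE choice.
    """
    a = np.maximum(h @ W_enc + b_enc, 0.0)   # ReLU concept activations
    if k < len(a):
        idx = np.argpartition(a, -k)[:-k]    # indices of all but the top k
        a[idx] = 0.0                         # zero them out
    return a

# Query/document dense embeddings (stand-ins for backbone outputs).
q = rng.normal(size=d_model)
d = rng.normal(size=d_model)

q_rep, d_rep = sae_sparse_rep(q), sae_sparse_rep(d)
score = float(q_rep @ d_rep)                 # sparse dot-product relevance score
```

Because both representations are sparse and non-negative, retrieval can reuse SPLADE's inverted-index machinery unchanged; only the "vocabulary" of postings lists becomes a set of latent concepts.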