AI Navigate

SEM: Sparse Embedding Modulation for Post-Hoc Debiasing of Vision-Language Models

arXiv cs.CV / March 20, 2026


Key Points

  • The paper introduces Sparse Embedding Modulation (SEM), a post-hoc, zero-shot debiasing framework for vision-language models like CLIP that operates in a Sparse Autoencoder latent space.
  • SEM disentangles bias- and query-relevant features by decomposing CLIP text embeddings into sparse components and modulating bias-relevant neurons.
  • The approach enables non-linear debiasing interventions and demonstrates substantial fairness gains in retrieval and zero-shot classification across four benchmark datasets and two CLIP backbones.
  • Overall, the results indicate that sparse latent representations can provide an effective foundation for debiasing vision-language models without sacrificing semantic fidelity.

Abstract

Models that bridge vision and language, such as CLIP, are key components of multimodal AI, yet their large-scale, uncurated training data introduce severe social and spurious biases. Existing post-hoc debiasing methods often operate directly in the dense CLIP embedding space, where bias and task-relevant information are highly entangled. This entanglement limits their ability to remove bias without degrading semantic fidelity. In this work, we propose Sparse Embedding Modulation (SEM), a post-hoc, zero-shot debiasing framework that operates in a Sparse Autoencoder (SAE) latent space. By decomposing CLIP text embeddings into disentangled features, SEM identifies and modulates bias-relevant neurons while preserving query-relevant ones. This enables more precise, non-linear interventions. Across four benchmark datasets and two CLIP backbones, SEM achieves substantial fairness gains in retrieval and zero-shot classification. Our results demonstrate that sparse latent representations provide an effective foundation for post-hoc debiasing of vision-language models.
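The mechanism described above (encode a CLIP text embedding into a sparse autoencoder latent space, attenuate bias-relevant neurons, decode back) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the SAE weights are random stand-ins, the CLIP embedding is simulated, and the set of bias-relevant latent indices is an assumed input (the paper's actual neuron-identification procedure is not reproduced here).

```python
import numpy as np

# Hypothetical SEM-style debiasing sketch. All weights and indices below
# are placeholder assumptions, not the paper's trained parameters.

rng = np.random.default_rng(0)
d_model, d_sae = 512, 4096          # CLIP embedding dim, SAE latent dim

W_enc = rng.normal(scale=0.02, size=(d_model, d_sae))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(scale=0.02, size=(d_sae, d_model))
b_dec = np.zeros(d_model)

def sae_encode(x):
    # ReLU yields a sparse, non-negative latent code
    return np.maximum(x @ W_enc + b_enc, 0.0)

def sae_decode(z):
    return z @ W_dec + b_dec

def debias(text_emb, bias_idx, scale=0.0):
    """Modulate bias-relevant latent neurons; leave query-relevant ones intact."""
    z = sae_encode(text_emb)
    z[bias_idx] *= scale            # scale=0 removes; 0 < scale < 1 attenuates
    return sae_decode(z)

emb = rng.normal(size=d_model)      # stand-in for a CLIP text embedding
bias_idx = np.array([3, 17, 42])    # assumed bias-relevant latent units
debiased = debias(emb, bias_idx)
```

Because the intervention happens after the ReLU nonlinearity of the encoder, zeroing selected latents is a non-linear edit of the original embedding rather than a fixed linear projection, which is the property the abstract highlights.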