Why Mean Pooling Works: Quantifying Second-Order Collapse in Text Embeddings

arXiv cs.CL / 5/1/2026


Key Points

  • The paper analyzes mean pooling in text embedding models and argues it can cause “second-order collapse,” where information in second-order (spatial/structural) statistics of the token embeddings is lost.
  • It introduces a simple metric to quantify how much collapse mean pooling induces, and applies it to real models and datasets.
  • Empirical results show modern text encoders are generally robust to this second-order collapse, with contrastively fine-tuned encoders less prone to it than their pretrained backbones.
  • The study attributes the robustness to how tightly token embeddings concentrate within each text, and finds that lower measured collapse correlates with better downstream task performance.
  • Overall, the findings provide a new explanation for why effective text embeddings can still be produced using relatively coarse mean pooling.
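The collapse described above is easy to see in a toy construction (mine, not the paper's): two token-embedding sets with identical means but very different second-order structure are mapped by mean pooling to the same text embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "texts" of 8 tokens in a 4-dim embedding space: one with
# tightly clustered tokens, one with widely spread tokens.
tight = 0.01 * rng.standard_normal((8, 4))
spread = 1.0 * rng.standard_normal((8, 4))

# Re-center both so their token means are exactly equal (zero).
tight -= tight.mean(axis=0)
spread -= spread.mean(axis=0)

# Mean pooling maps both texts to the same embedding, even though
# their covariances differ by orders of magnitude.
same_pooled = np.allclose(tight.mean(axis=0), spread.mean(axis=0))
cov_gap = np.linalg.norm(np.cov(tight.T) - np.cov(spread.T))
print(same_pooled)   # True: pooled embeddings are identical
print(cov_gap > 0)   # True: second-order statistics differ
```

The construction is deliberately adversarial; the paper's empirical finding is that token embeddings produced by real encoders rarely land in this regime.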

Abstract

Mean pooling, which averages token embeddings, is the standard approach for constructing text embeddings. This paper examines whether mean pooling actually works well in real models. First, we note that mean pooling can collapse information beyond the first-order statistics of the token embeddings, such as second-order statistics that capture their spatial structure, potentially mapping distinct token embedding distributions to similar text embeddings. Motivated by this concern, we propose a simple metric to quantify the collapse induced by mean pooling. Using this metric, we empirically measure how often the collapse occurs in actual models and texts, and find that modern text encoders are robust to it. In particular, contrastively fine-tuned text encoders tend to be less prone to the collapse than their pretrained backbone models. We also find that this robustness stems from the concentration of token embeddings within each text. In addition, robustness to the collapse, as quantified by our metric, correlates with downstream task performance. Overall, our findings offer a new perspective on why modern text encoders remain effective despite relying on seemingly coarse mean pooling.
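The abstract does not define the proposed metric, but its two ingredients, first-order statistics (the mean) and the within-text concentration of tokens, suggest a simple illustrative proxy. The sketch below is hypothetical and is not the paper's metric: it scores how much token spread (covariance trace) mean pooling discards relative to the size of the pooled embedding, so concentrated token sets score low (little collapse risk) and dispersed ones score high.

```python
import numpy as np

def collapse_score(tokens: np.ndarray) -> float:
    """Hypothetical collapse proxy (not the paper's metric):
    trace of the token covariance over the squared norm of the
    mean-pooled embedding. Low = tokens concentrated around the
    mean, so pooling discards little second-order information."""
    mu = tokens.mean(axis=0)                 # mean-pooled text embedding
    cov = np.cov(tokens.T)                   # second-order statistics
    return float(np.trace(cov) / (mu @ mu + 1e-12))

rng = np.random.default_rng(1)
mu = rng.standard_normal(16)
# Concentrated vs. dispersed token embeddings around the same mean.
concentrated = mu + 0.01 * rng.standard_normal((32, 16))
dispersed = mu + 1.0 * rng.standard_normal((32, 16))
print(collapse_score(concentrated) < collapse_score(dispersed))  # True
```

This mirrors the paper's reported mechanism: encoders whose token embeddings concentrate tightly within each text lose little to mean pooling.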