Sigmoid Head for Quality Estimation under Language Ambiguity

arXiv cs.CL · March 30, 2026


Key Points

  • This paper argues that standard language-model probabilities (from the softmax head) are unreliable for quality estimation because ambiguous language can yield multiple valid outputs while softmax forces probability mass into a single best option.
  • It identifies two core causes: softmax’s inability to assign high probabilities to multiple correct options simultaneously, and training targets being one-hot references that implicitly assume only one correct token.
  • The authors introduce a trainable “Sigmoid Head,” an additional unembedding head with sigmoid activation, designed to produce a better quality signal under ambiguity and to remain computationally efficient.
  • To address training-target single-reference limitations, they use a negative-sampling heuristic that avoids likely alternative-correct tokens during Sigmoid Head training.
  • The paper claims the Sigmoid Head provides a notably improved quality estimate signal versus the original softmax head and is more robust in out-of-domain settings because it does not require human-annotated quality data.
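The architectural change described in the bullets above can be sketched as a small extra unembedding layer whose logits pass through an element-wise sigmoid rather than a softmax. The following NumPy sketch is illustrative only; the shapes, names, and initialization are assumptions, not the authors' implementation:

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_head(hidden: np.ndarray, W: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Extra unembedding head with sigmoid activation.

    hidden: (seq_len, hidden_size) final hidden states of the base LM
    W:      (hidden_size, vocab_size) trainable unembedding weights
    b:      (vocab_size,) bias

    Each vocabulary entry gets an independent score in (0, 1), so several
    valid continuations can all score near 1 at the same time -- unlike
    softmax, which forces all options to compete for probability mass.
    """
    return sigmoid(hidden @ W + b)

# Toy dimensions for illustration.
rng = np.random.default_rng(0)
hidden = rng.standard_normal((5, 16))        # 5 output positions
W = rng.standard_normal((16, 100)) * 0.1     # vocabulary of 100 tokens
b = np.zeros(100)
scores = sigmoid_head(hidden, W, b)
assert scores.shape == (5, 100)
assert scores.min() > 0.0 and scores.max() < 1.0
```

Because the scores are not normalized across the vocabulary, two equally valid tokens can both receive a probability near 1, which is exactly the property the softmax head lacks under ambiguity.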

Abstract

Language model (LM) probability is not a reliable quality estimator, because natural language is ambiguous. When multiple output options are valid, the model's probability distribution is spread across them, which can misleadingly indicate low output quality. This issue has two causes: (1) the LM's final output activation is a softmax, which does not allow multiple correct options to receive high probabilities simultaneously, and (2) the LM's training targets are single, one-hot encoded references, implying that there is only one correct option at each output step. We propose training a module for quality estimation (QE) on top of pre-trained LMs to address these limitations. The module, called the Sigmoid Head, is an extra unembedding head with sigmoid activation that tackles the first limitation. To tackle the second, during the negative sampling used to train the Sigmoid Head, we apply a heuristic to avoid selecting tokens that may be alternative correct options. The Sigmoid Head is computationally efficient during both training and inference. Its probability is a notably better quality signal than that of the original softmax head. Because the Sigmoid Head does not rely on human-annotated quality data, it is also more robust in out-of-domain settings than supervised QE.
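One plausible instantiation of the training step described above, under the assumption (the abstract does not specify the heuristic) that tokens the base softmax head already rates as likely are treated as potential alternative-correct options and excluded from the negative pool. The function names, the 0.05 threshold, and the binary cross-entropy objective are illustrative assumptions:

```python
import numpy as np

def sample_negatives(base_probs, reference_id, num_negatives,
                     threshold=0.05, rng=None):
    """Negative sampling that skips potentially alternative-correct tokens.

    Assumed heuristic: any token to which the base softmax head assigns
    probability above `threshold` may be a valid alternative, so it is
    excluded from the negative pool, along with the reference token itself.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    candidates = np.array([t for t in range(base_probs.shape[0])
                           if t != reference_id and base_probs[t] < threshold])
    return rng.choice(candidates, size=num_negatives, replace=False)

def bce_loss(scores, reference_id, negative_ids):
    """Binary cross-entropy: push the reference score toward 1 and the
    sampled negatives toward 0; all other tokens are simply ignored."""
    pos = -np.log(scores[reference_id])
    neg = -np.log(1.0 - scores[negative_ids]).sum()
    return pos + neg

# Toy step: token 3 is the reference; token 7 looks like a valid alternative
# under the base softmax head, so the heuristic never samples it as a negative.
base_probs = np.full(10, 0.01)
base_probs[3], base_probs[7] = 0.5, 0.4
negatives = sample_negatives(base_probs, reference_id=3, num_negatives=4)
assert 3 not in negatives and 7 not in negatives

scores = np.full(10, 0.5)   # sigmoid-head outputs for this position
loss = bce_loss(scores, reference_id=3, negative_ids=negatives)
```

Skipping high-probability tokens is what keeps the sigmoid head from being trained to suppress valid alternatives, which is the failure mode the one-hot targets of the softmax head cannot avoid.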