On the Rejection Criterion for Proxy-based Test-time Alignment

arXiv cs.CL / 4/20/2026


Key Points

  • The paper analyzes two proxy-based test-time alignment methods—implicit reward and nudging—and shows they can be understood as sampling from closely related graphical models that differ mainly in their rejection criterion.
  • It argues that using the large model’s “confidence” as the rejection criterion is poorly motivated, citing linguistic issues such as ambiguous phrasing.
  • The authors propose a new rejection criterion based on a more conservative “confidence bet” to better govern when the small aligned proxy should influence token generation.
  • Experiments indicate that this new rejection criterion improves performance over prior approaches across multiple datasets.

Abstract

Recent works have proposed test-time alignment methods that rely on a small aligned model as a proxy to guide the generation of a larger base (unaligned) model. The implicit reward approach skews the large model's distribution, whereas the nudging approach defers generation of the next token to the small aligned model when the large base model is unconfident about its outcome. In this work, we first show that both approaches can be reduced to sampling from similar graphical models, where they differ only in the definition of a rejection criterion (or distribution). Moreover, we argue that the confidence criterion is ill-motivated due to linguistic phenomena like ambiguous phrasing. We propose a novel rejection criterion based on a conservative confidence bet. Experimentally, our novel approach outperforms previous work on several datasets.
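To make the deferral mechanism concrete, here is a minimal sketch of a nudging-style rejection rule as the abstract describes it: at each step, the base model generates the next token only if it is confident, and otherwise defers to the small aligned proxy. This is not the paper's implementation; the threshold value, the use of top-token probability as "confidence", and the greedy decoding are illustrative assumptions.

```python
# Illustrative sketch of nudging-style deferral (not the paper's code).
# Confidence = probability of the base model's top token; the threshold
# and greedy decoding are assumptions made for this example.
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def choose_next_token(base_logits, proxy_logits, confidence_threshold=0.5):
    """Return (token_index, source): defer to the proxy when the base
    model's top-token probability falls below the threshold."""
    base_probs = softmax(base_logits)
    top_prob = max(base_probs)
    if top_prob >= confidence_threshold:
        # Base model is confident: keep its own prediction.
        return base_probs.index(top_prob), "base"
    # Base model is unconfident: the small aligned proxy decides.
    proxy_probs = softmax(proxy_logits)
    return proxy_probs.index(max(proxy_probs)), "proxy"

# Base spreads probability mass (unconfident), so the proxy decides.
token, source = choose_next_token([1.0, 1.1, 0.9], [5.0, 0.0, 0.0])
```

The paper's contribution is to replace this confidence test with a more conservative "confidence bet" rejection criterion; the scaffold above only shows where such a criterion plugs into token-level generation.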