SSG: Logit-Balanced Vocabulary Partitioning for LLM Watermarking

arXiv cs.AI / April 27, 2026


Key Points

  • The paper studies how LLM watermarking schemes like KGW can lose effectiveness in low-entropy generation tasks such as code generation and mathematical reasoning.
  • It identifies that the “watermark strength” is governed by the next-token probability distribution, which limits how much token selection can be modified under random vocabulary partitioning.
  • The authors propose SSG (Sort-then-Split by Groups), which partitions the vocabulary into two logit-balanced subsets to increase the per-token lower bound of watermark strength.
  • Experiments on code and math reasoning datasets show that SSG improves watermark detectability compared with prior KGW-style partitioning approaches.
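The KGW mechanism the paper builds on can be sketched roughly as follows (a simplified illustration, not the authors' code; `gamma` and `delta` follow KGW's usual notation for the green-list fraction and the logit bias):

```python
import random

def kgw_random_partition(vocab_size, key, gamma=0.5):
    """KGW-style random split of token ids into (green, red),
    with |green| approximately gamma * vocab_size. The key would
    normally be derived from the preceding context."""
    rng = random.Random(key)
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    cut = int(gamma * vocab_size)
    return set(ids[:cut]), set(ids[cut:])

def biased_logits(logits, green, delta=2.0):
    """Add the watermark bias delta to every green-list logit
    before sampling the next token."""
    return [x + delta if i in green else x for i, x in enumerate(logits)]

# Under low entropy the bias barely matters: if one token already has a
# dominant logit, adding delta to roughly half the vocabulary rarely
# changes which token is selected, so the text carries little signal.
```

This illustrates the limitation the paper targets: with a random split, whether the high-probability token lands in the green list is left to chance, which caps the per-token watermark strength in low-entropy settings.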

Abstract

Watermarking has emerged as a promising technique for tracing the authorship of content generated by large language models (LLMs). Among existing approaches, the KGW scheme is particularly attractive due to its versatility, efficiency, and effectiveness in natural language generation. However, KGW's effectiveness degrades significantly under low-entropy settings such as code generation and mathematical reasoning. A crucial step in the KGW method is random vocabulary partitioning, which enables adjustments to token selection based on specific preferences. Our study revealed that the next-token probability distribution plays a critical role in determining how much, or even whether, we can modify token selection and, consequently, the effectiveness of watermarking. We refer to this characteristic, associated with the probability distribution of each token prediction, as *watermark strength*. Under random vocabulary partitioning, the lower bound of watermark strength is dictated by the next-token probability distribution. However, we found that, by redesigning the vocabulary partitioning algorithm, we can potentially raise this lower bound. In this paper, we propose SSG (**S**ort-then-**S**plit by **G**roups), a method that partitions the vocabulary into two logit-balanced subsets. This design lifts the lower bound of watermark strength for each token prediction, thereby improving watermark detectability. Experiments on code generation and mathematical reasoning datasets demonstrate the effectiveness of SSG.
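The abstract does not include pseudocode, but the "Sort-then-Split by Groups" idea suggests a partition along these lines (a speculative sketch: the `group_size` parameter and the per-group random split are my assumptions, not the authors' exact algorithm):

```python
import random

def ssg_partition(logits, key, group_size=2):
    """Sort token ids by logit, then within each consecutive group of
    `group_size` sorted tokens, randomly send half to the green list
    and the rest to the red list. Because each group contains tokens of
    similar logit, the two subsets come out logit-balanced, so at least
    one high-probability candidate always sits on each side."""
    rng = random.Random(key)
    order = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    green, red = set(), set()
    for start in range(0, len(order), group_size):
        group = order[start:start + group_size]
        rng.shuffle(group)          # random assignment within the group
        half = len(group) // 2
        green.update(group[:half])
        red.update(group[half:])
    return green, red
```

With `group_size=2`, every pair of rank-adjacent tokens is split across the two lists, so even when one token dominates the distribution, a comparably-ranked alternative is guaranteed to be in the green list, which is what raises the per-token lower bound on watermark strength.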
