Anchored Sliding Window: Toward Robust and Imperceptible Linguistic Steganography

arXiv cs.CL / 4/13/2026


Key Points

  • The paper addresses a key weakness of language-model-based linguistic steganography: it is fragile to minor text alterations because prior methods assume the steganographic text is transmitted unchanged.
  • It introduces the anchored sliding window (ASW) framework, which anchors the prompt and an added “bridge” context within the model’s sliding window so the model can compensate for excluded tokens.
  • The authors model the bridge context optimization as a prompt-distillation variant and extend it with self-distillation strategies to improve training robustness.
  • Experiments indicate ASW consistently improves text quality, imperceptibility, and robustness versus a baseline approach across multiple settings.
  • The code is publicly released at github.com/ryehr/ASW_steganography, enabling reproduction and further study.
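
The anchoring idea above can be sketched as a context-assembly step: the prompt and bridge tokens are always retained, and only the most recent generated tokens fill the remaining window budget. This is an illustrative sketch, not the paper's implementation; the function name, argument layout, and token representation are all assumptions.

```python
def build_asw_context(prompt_ids, bridge_ids, generated_ids, window_size):
    """Assemble a model context under an anchored sliding window (sketch).

    The prompt and a "bridge" context are anchored (always kept), while
    generated tokens slide: only the most recent ones fit in the window.
    All names here are hypothetical; the paper's interface may differ.
    """
    anchored = prompt_ids + bridge_ids
    budget = window_size - len(anchored)
    if budget < 0:
        raise ValueError("window too small to hold the anchored tokens")
    # Keep only the latest generated tokens; older ones are excluded,
    # which the bridge context is trained to compensate for.
    recent = generated_ids[-budget:] if budget > 0 else []
    return anchored + recent


# Example: a 5-token window keeps the 3 anchored tokens plus the
# 2 most recent generated tokens.
ctx = build_asw_context([1, 2], [3], [4, 5, 6, 7, 8], 5)
print(ctx)  # [1, 2, 3, 7, 8]
```

Because the anchored prefix is identical at every decoding step, the model's next-token distribution depends only weakly on distant generated tokens, which is what makes the scheme more tolerant of local edits to the transmitted text.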

Abstract

Linguistic steganography based on language models typically assumes that steganographic texts are transmitted without alteration, making them fragile to even minor modifications. While previous work mitigates this fragility by limiting the context window, it significantly compromises text quality. In this paper, we propose the anchored sliding window (ASW) framework to improve imperceptibility and robustness. In addition to the latest tokens, the prompt and a bridge context are anchored within the context window, encouraging the model to compensate for the excluded tokens. We formulate the optimization of the bridge context as a variant of prompt distillation, which we further extend using self-distillation strategies. Experiments show that our ASW significantly and consistently outperforms the baseline method in text quality, imperceptibility, and robustness across diverse settings. The code is available at github.com/ryehr/ASW_steganography.