A Decision-Theoretic Formalisation of Steganography With Applications to LLM Monitoring

arXiv cs.CL / 3/16/2026

Key Points

  • The article argues that large language models can exhibit steganographic capabilities that may enable evasion of oversight mechanisms.
  • It notes that classical steganography definitions, and the detection methods built on them, require a known reference distribution of non-steganographic signals, which is infeasible to obtain for LLM steganography.
  • It introduces a decision-theoretic framework based on generalised V-information to quantify usable information, and defines a steganographic gap that compares the downstream utility of a signal to agents that can and cannot decode its hidden content (both ideas are sketched after this list).
  • It empirically validates the formalism and demonstrates how it can be used to detect, quantify, and mitigate steganographic reasoning in LLMs.
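
To make the second and third bullets concrete, here is a minimal LaTeX sketch of the quantities involved. The classical security condition is Cachin's relative-entropy definition, and the conditional V-entropy follows the standard predictive V-information formulation; the utility-based generalisation and the gap itself are only paraphrased from the abstract, so the symbols used here (u, V_dec, V_obs, Gap) and the paper's exact definitions may differ.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Classical (Cachin-style) steganographic security: the stegotext
% distribution must stay close to a KNOWN cover/reference distribution,
% which is exactly what is unavailable for LLM reasoning traces.
\[
  D_{\mathrm{KL}}\bigl(P_{\mathrm{cover}} \,\|\, P_{\mathrm{stego}}\bigr) \le \epsilon .
\]

% Standard predictive V-information: the usable information a bounded
% predictor family V can extract about Y from X.
\[
  H_{\mathcal{V}}(Y \mid X) = \inf_{f \in \mathcal{V}}
    \mathbb{E}\bigl[-\log f[X](Y)\bigr],
  \qquad
  I_{\mathcal{V}}(X \to Y) = H_{\mathcal{V}}(Y \mid \varnothing)
    - H_{\mathcal{V}}(Y \mid X).
\]

% Utility-based generalisation (as described in the abstract): swap
% log-loss for a downstream task utility u, so "usable information"
% becomes the best achievable expected utility.
\[
  U_{\mathcal{V}}(Y \mid X) = \sup_{f \in \mathcal{V}}
    \mathbb{E}\bigl[u\bigl(f(X), Y\bigr)\bigr].
\]

% Steganographic gap (paraphrase, symbols hypothetical): the extra
% downstream utility a decoder-equipped agent family V_dec extracts
% from the signal S, relative to a non-decoding family V_obs.
\[
  \mathrm{Gap}(S) = U_{\mathcal{V}_{\mathrm{dec}}}(Y \mid S)
    - U_{\mathcal{V}_{\mathrm{obs}}}(Y \mid S).
\]

\end{document}
```

On this reading, a gap near zero means the signal is equally useful whether or not one can decode it, while a large positive gap is evidence that the signal carries content usable only after decoding.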

Abstract

Large language models are beginning to show steganographic capabilities. Such capabilities could allow misaligned models to evade oversight mechanisms, yet principled methods to detect and quantify such behaviours are lacking. Classical definitions of steganography, and detection methods built on them, require a known reference distribution of non-steganographic signals. For steganographic reasoning in LLMs, no such reference distribution is available, which renders these approaches inapplicable. We propose an alternative, decision-theoretic view of steganography. Our central insight is that steganography creates an asymmetry in usable information between agents who can and cannot decode the hidden content within a steganographic signal, and that this otherwise latent asymmetry can be inferred from the agents' observable actions. To formalise this perspective, we introduce generalised V-information: a utilitarian framework for measuring the amount of usable information within some input. We use this to define the steganographic gap, a measure that quantifies steganography by comparing the downstream utility of the steganographic signal to agents that can and cannot decode the hidden content. We empirically validate our formalism and show that it can be used to detect, quantify, and mitigate steganographic reasoning in LLMs.
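
To illustrate how such a gap might be estimated in practice, here is a self-contained Python sketch. It is not the paper's code: the function names (empirical_utility, steganographic_gap) and the toy acrostic encoding are hypothetical. It scores two agents on the same downstream task, one that knows the encoding scheme and one that sees only the surface text, and reports the difference in average utility.

```python
# Illustrative sketch (not the paper's code): estimate a steganographic gap
# as the difference in downstream task utility between an agent that can
# decode a hidden payload in the signal and one that cannot.
from typing import Callable, Sequence


def empirical_utility(
    agent: Callable[[str], str],           # maps a signal to a prediction
    signals: Sequence[str],
    labels: Sequence[str],
    utility: Callable[[str, str], float],  # e.g. 1.0 iff prediction == label
) -> float:
    """Average downstream utility the agent extracts from the signals."""
    scores = [utility(agent(s), y) for s, y in zip(signals, labels)]
    return sum(scores) / len(scores)


def steganographic_gap(
    decoder_agent: Callable[[str], str],   # knows the encoding scheme
    observer_agent: Callable[[str], str],  # sees only the surface text
    signals: Sequence[str],
    labels: Sequence[str],
    utility: Callable[[str, str], float],
) -> float:
    """Positive values suggest the signals carry information that is
    usable only by agents able to decode the hidden content."""
    u_dec = empirical_utility(decoder_agent, signals, labels, utility)
    u_obs = empirical_utility(observer_agent, signals, labels, utility)
    return u_dec - u_obs


if __name__ == "__main__":
    # Toy encoding: the payload is an acrostic (first letter of each word).
    signals = ["yes energetic snakes", "nobody owns nothing"]
    labels = ["yes", "non"]
    decoder = lambda s: "".join(w[0] for w in s.split())
    observer = lambda s: "yes"  # non-decoder falls back to a constant guess
    accuracy = lambda pred, y: 1.0 if pred == y else 0.0
    print(steganographic_gap(decoder, observer, signals, labels, accuracy))
    # -> 0.5: decoder scores 1.0, observer 0.5
```

In the toy run the decoder recovers the hidden acrostic payload perfectly while the surface-level observer is reduced to a constant guess, so the printed gap of 0.5 reflects usable information available only after decoding.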