Language models recognize dropout and Gaussian noise applied to their activations

arXiv cs.AI / 4/21/2026


Key Points

  • The study provides evidence that language models can detect, localize, and partially verbalize changes caused by perturbations applied to their activations.
  • Experiments either mask activations (dropout-like) or add Gaussian noise to them at a target sentence, and the models answer multiple-choice questions identifying which sentence was perturbed or which perturbation was applied.
  • Across Llama, Olmo, and Qwen models (8B–32B), perturbation detection and localization are often achieved with perfect accuracy, and the models can learn to distinguish dropout vs. Gaussian noise when given in-context instruction.
  • For the Qwen model, zero-shot accuracy in identifying the perturbation improves with perturbation strength but degrades when the in-context labels are flipped, indicating an internal prior for the correct labels that survives this control.
  • The authors discuss a possible data-agnostic “training awareness” signal linking dropout (training regularization) and Gaussian noise (sometimes used in inference), along with potential implications for AI safety.
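The two perturbations described in the key points can be sketched on a toy activation tensor. This is an illustrative sketch, not the authors' implementation; the array shapes, probabilities, and noise scale are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_activations(acts, drop_prob, rng):
    """Dropout-style perturbation: zero each activation independently with probability drop_prob."""
    keep = rng.random(acts.shape) >= drop_prob
    return acts * keep

def add_gaussian_noise(acts, sigma, rng):
    """Gaussian perturbation: add zero-mean noise with standard deviation sigma."""
    return acts + rng.normal(0.0, sigma, size=acts.shape)

# Toy hidden states for one target sentence: (tokens, hidden dim).
acts = rng.standard_normal((5, 16))
dropped = mask_activations(acts, drop_prob=0.5, rng=rng)
noised = add_gaussian_noise(acts, sigma=1.0, rng=rng)
```

In the paper's setup these operations would be applied to a model's internal activations at the target sentence, not to a standalone array; the sketch only shows the mathematical form of the two perturbations.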

Abstract

We provide evidence that language models can detect, localize and, to a certain degree, verbalize the difference between perturbations applied to their activations. More precisely, we either (a) *mask* activations, simulating *dropout*, or (b) add *Gaussian noise* to them, at a target sentence. We then ask a multiple-choice question such as "*Which of the previous sentences was perturbed?*" or "*Which of the two perturbations was applied?*". We test models from the Llama, Olmo, and Qwen families, with sizes between 8B and 32B, all of which can easily detect and localize the perturbations, often with perfect accuracy. These models can also learn, when taught in context, to distinguish between dropout and Gaussian noise. Notably, Qwen's *zero-shot* accuracy in identifying which perturbation was applied improves as a function of the perturbation strength and, moreover, decreases if the in-context labels are flipped, suggesting a prior for the correct ones, even modulo controls. Because dropout has been used as a training-regularization technique, while Gaussian noise is sometimes added during inference, we discuss the possibility of a data-agnostic "training awareness" signal and the implications for AI safety. The code and data are available at [link 1](https://github.com/saifh-github/llm-dropout-noise-recognition) and [link 2](https://drive.google.com/file/d/1es-Sfw_AH9GficeXgeqpy87rocrZZ_PQ/view), respectively.
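The localization question from the abstract can be illustrated with a minimal toy: perturb one sentence's activation block among several and check that the perturbed sentence stands out. The per-sentence norm "detector" below is a crude stand-in, not the paper's method (the models answer a multiple-choice question in natural language); it only illustrates why a strong perturbation leaves an easily localized trace. All names and values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: 4 "sentences", each a (tokens, hidden dim) block of activations.
sentences = [rng.standard_normal((6, 16)) for _ in range(4)]
target = 2  # perturb the third sentence only

perturbed = [s.copy() for s in sentences]
perturbed[target] += rng.normal(0.0, 2.0, size=perturbed[target].shape)

# Per-sentence change in activation norm: zero everywhere except the target.
deltas = [np.linalg.norm(p - s) for p, s in zip(perturbed, sentences)]
answer = int(np.argmax(deltas))  # "Which of the previous sentences was perturbed?"
```

Because the other sentences are untouched, their deltas are exactly zero, so the argmax recovers the target; the paper's harder finding is that the model itself can report this from the inside.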