Language models recognize dropout and Gaussian noise applied to their activations
arXiv cs.AI / 4/21/2026
📰 News · Ideas & Deep Analysis · Models & Research
Key Points
- The study provides evidence that language models can detect, localize, and partially verbalize changes caused by perturbations applied to their activations.
- Experiments either mask activations (dropout-like) or add Gaussian noise at target sentences; the models can then answer multiple-choice questions identifying which sentence was perturbed and which perturbation type was applied.
- Across Llama, Olmo, and Qwen models (8B–32B), perturbation detection and localization are often achieved with perfect accuracy, and the models can learn to distinguish dropout vs. Gaussian noise when given in-context instruction.
- For Qwen, zero-shot identification improves with perturbation strength but degrades when in-context labels are flipped, suggesting an internal prior aligned with the correct labels that persists under these controls.
- The authors discuss a possible data-agnostic “training awareness” signal linking dropout (training regularization) and Gaussian noise (sometimes used in inference), along with potential implications for AI safety.
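The two perturbations described above can be sketched in plain Python. This is a hypothetical illustration, not the authors' code: the paper perturbs hidden states inside the model at target sentences, whereas here `perturb_activations`, its parameters `p` and `sigma`, and the flat list of floats standing in for an activation vector are all assumptions made for clarity.

```python
import random

def perturb_activations(activations, mode="dropout", p=0.1, sigma=0.5, seed=0):
    """Apply a dropout-like mask or additive Gaussian noise to a flat
    list of activation values (illustrative sketch only)."""
    rng = random.Random(seed)
    if mode == "dropout":
        # Zero each unit with probability p; rescale survivors by 1/(1-p)
        # so the expected value of each activation is unchanged.
        return [0.0 if rng.random() < p else a / (1.0 - p) for a in activations]
    if mode == "gaussian":
        # Add independent zero-mean Gaussian noise with std. dev. sigma.
        return [a + rng.gauss(0.0, sigma) for a in activations]
    raise ValueError(f"unknown mode: {mode}")

acts = [0.3, -1.2, 0.8, 2.1]
masked = perturb_activations(acts, mode="dropout", p=0.5)
noised = perturb_activations(acts, mode="gaussian", sigma=0.5)
```

In the study's setup, the model is then asked multiple-choice questions about which sentence received the perturbed activations and which of the two perturbation types was used; increasing `p` or `sigma` corresponds to the "perturbation strength" that drives zero-shot identification accuracy.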