Sustained Gradient Alignment Mediates Subliminal Learning in a Multi-Step Setting: Evidence from MNIST Auxiliary-Logit Distillation Experiments
arXiv cs.LG · April 29, 2026
Key Points
- The paper reports that MNIST auxiliary-logit distillation can trigger “subliminal learning”: a student acquires an unintended teacher trait even when distillation uses only auxiliary logits that carry no class information.
- Prior theory for a single gradient-descent step attributes the effect to alignment between the unintended-trait direction and the distillation gradient, but it was unclear whether that alignment persists across multiple optimization steps.
- The authors find empirically that gradient alignment remains weakly but consistently positive throughout multi-step training and causally contributes to acquisition of the unintended trait (see the sketch after this list).
- A proposed mitigation approach, “liminal training,” attenuates the gradient alignment but does not prevent trait acquisition in this experimental setup.
- The results imply that mitigations relying on suppressing alignment may be unreliable in regimes where the first-order (dominant) gradient term drives the phenomenon.
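
The alignment diagnostic described above can be tracked with standard autograd. Below is a minimal PyTorch sketch, assuming a student/teacher pair whose heads emit 10 class logits followed by a few auxiliary logits; the trait probe loss, `N_AUX`, and all function names here are hypothetical illustrations, not the paper's actual code.

```python
# Minimal sketch: per-step cosine alignment between the distillation
# gradient (auxiliary logits only) and the gradient of a trait probe loss.
import torch
import torch.nn.functional as F

N_AUX = 3  # hypothetical number of auxiliary logits after the 10 class logits


def flat_grad(loss, params):
    """Flatten the gradient of `loss` w.r.t. `params` into one vector."""
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])


def alignment_step(student, teacher, x, trait_targets, optimizer):
    """Run one distillation step; return cos(grad_distill, grad_trait)."""
    params = [p for p in student.parameters() if p.requires_grad]
    s_logits = student(x)  # shape: (batch, 10 + N_AUX)
    with torch.no_grad():
        t_logits = teacher(x)

    # Distillation loss uses ONLY the auxiliary (non-class) logits.
    distill_loss = F.mse_loss(s_logits[:, 10:], t_logits[:, 10:])

    # Hypothetical trait probe: a loss that measures the unintended trait,
    # e.g. cross-entropy toward the teacher's biased class predictions.
    trait_loss = F.cross_entropy(s_logits[:, :10], trait_targets)

    g_distill = flat_grad(distill_loss, params)
    g_trait = flat_grad(trait_loss, params)
    cos = F.cosine_similarity(g_distill, g_trait, dim=0).item()

    # The parameter update itself uses only the distillation loss.
    optimizer.zero_grad()
    distill_loss.backward()
    optimizer.step()
    return cos
```

Logging the returned cosine at every step would give the kind of alignment trace the paper examines: a weakly positive value that persists across steps is the multi-step signature the authors report.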