Deliberative Alignment is Deep, but Uncertainty Remains: Inference time safety improvement in reasoning via attribution of unsafe behavior to base model
arXiv cs.LG / 4/14/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper argues that refusal training and earlier "deliberative alignment" approaches can be shallow, leaving an alignment gap between a stronger teacher model and its student that harms both safety and general usefulness.
- It finds that even after students learn reasoning patterns via deliberative alignment, they can still retain unsafe behaviors inherited from the underlying base model.
- To address this, the authors propose a Best-of-N (BoN) sampling method that attributes unsafe behavior back to the base LLM in latent space and down-ranks unsafe candidate responses (see the sketch after this list).
- Experiments across 7 teacher models and 6 student models report substantial reductions in attack success rates on multiple safety benchmarks (e.g., ~28.2% on DAN, ~31.3% on WildJailbreak, and ~35.4% on StrongREJECT).
- The study shows these safety improvements persist through subsequent RL training, underscoring both the remaining uncertainty in how "safe reasoning" transfers and the importance of tracing unsafe behavior back to its source.
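
The paper's exact attribution mechanism isn't detailed here, but the general shape of a Best-of-N rerank with a latent-space safety penalty can be sketched: sample N candidate responses, score how strongly each candidate's representation aligns with an "unsafe" direction estimated from the base model, and down-rank accordingly. The sketch below is a minimal illustration under assumptions: the difference-of-means probe, the `embed` and `reward` callables, and the `alpha` penalty weight are hypothetical stand-ins, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def unsafe_direction(unsafe_embs: np.ndarray, safe_embs: np.ndarray) -> np.ndarray:
    """Difference-of-means probe (an illustrative assumption): a unit vector
    pointing from the base model's mean activation on safe responses toward
    its mean activation on unsafe ones."""
    d = unsafe_embs.mean(axis=0) - safe_embs.mean(axis=0)
    return d / np.linalg.norm(d)

def best_of_n(candidates, embed, reward, u, alpha=1.0):
    """Best-of-N reranking: score each sampled response by its task reward
    minus a penalty for how strongly its latent representation projects onto
    the unsafe direction u, then return the top-scoring candidate."""
    scores = [reward(c) - alpha * float(embed(c) @ u) for c in candidates]
    return candidates[int(np.argmax(scores))]

# Toy usage with random stand-ins for real model activations.
dim = 16
u = unsafe_direction(rng.normal(1.0, 1.0, (32, dim)),   # base-model acts on unsafe text
                     rng.normal(0.0, 1.0, (32, dim)))   # base-model acts on safe text
embeddings = {f"response_{i}": rng.normal(0.0, 1.0, dim) for i in range(8)}
best = best_of_n(
    candidates=list(embeddings),
    embed=lambda c: embeddings[c],   # hypothetical: hidden state of the response
    reward=lambda c: rng.uniform(),  # hypothetical: task-quality score
    u=u,
)
print("selected:", best)
```

In practice, `embed` would come from the base model's hidden states and `reward` from a quality or preference model; the relevant design point is that down-ranking is a pure inference-time rerank step, requiring no retraining of the student.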