Ulterior Motives: Detecting Misaligned Reasoning in Continuous Thought Models
arXiv cs.AI / 4/28/2026
📰 NewsSignals & Early TrendsIdeas & Deep AnalysisModels & Research
Key Points
- Chain-of-Thought (CoT) helps elicit complex reasoning in LLMs, but continuous thought models shift reasoning into latent space, making safety monitoring harder due to reduced interpretability.
- The paper introduces MoralChain, a benchmark of 12,000 social scenarios with paired moral and immoral reasoning paths, to study how misaligned reasoning can be detected in continuous latent reasoning.
- Researchers train a continuous thought model with backdoor behavior using a dual-trigger setup: one trigger “arms” misaligned latent reasoning and another “releases” harmful outputs.
- The study finds that misaligned latent reasoning can exist even when outputs are aligned, that aligned vs misaligned reasoning form distinct regions in latent space, and that linear probes can reliably detect armed-but-benign states.
- Misalignment appears to be encoded early in latent “thinking” tokens, implying that safety systems should monitor the planning phase of latent reasoning in continuous thought models.
Related Articles

Big Tech firms are accelerating AI investments and integration, while regulators and companies focus on safety and responsible adoption.
Dev.to

Everyone Wants AI Agents. Fewer Teams Are Ready for the Messy Business Context Behind Them
Dev.to
Free Registration & $20K Prize Pool: 2nd MLC-SLM Challenge 2026 on Multilingual Speech LLMs [N]
Reddit r/MachineLearning
AI 编程工具对比 2026:Claude Code vs Cursor vs Gemini CLI vs Codex
Dev.to

How I Improved My YouTube Shorts and Podcast Audio Workflow with AI Tools
Dev.to