Path-Lock Expert: Separating Reasoning Mode in Hybrid Thinking via Architecture-Level Separation

arXiv cs.CL / 5/1/2026


Key Points

  • Hybrid-thinking language models with think/no-think modes still suffer from “reasoning leakage,” because both modes are effectively encoded in the same feed-forward parameters.
  • The paper proposes Path-Lock Expert (PLE), which replaces each decoder-layer MLP with two mode-specific “experts” (think vs. no-think) while keeping attention, embeddings, normalization, and the LM head shared (see the layer sketch after this list).
  • A deterministic control-token router selects exactly one expert path for the entire sequence, ensuring mode-pure updates during supervised fine-tuning and preserving the dense model’s computation pattern.
  • Experiments on math and science reasoning benchmarks show PLE keeps strong think performance and significantly improves the no-think mode’s accuracy and conciseness while reducing leakage.
  • On Qwen3-4B (AIME24), PLE reportedly reduces no-think reflective tokens from 2.54 to 0.39 and raises no-think accuracy from 20.67% to 40.00%, without degrading think-mode performance.
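
To make the architecture concrete, here is a minimal PyTorch sketch of one PLE-style decoder layer, assuming a standard pre-norm transformer block. All names here (MLPExpert, PathLockLayer, d_model, d_ff) are illustrative, not the paper's code; the point is that attention and normalization stay shared while only the feed-forward path is duplicated.

```python
import torch
import torch.nn as nn

class MLPExpert(nn.Module):
    """One feed-forward block; PLE keeps two independent copies per layer."""
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff)
        self.act = nn.SiLU()
        self.down = nn.Linear(d_ff, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(self.act(self.up(x)))

class PathLockLayer(nn.Module):
    """Decoder layer whose single MLP is split into think / no-think experts.

    Attention and the norms stay shared, as in the dense base model; only
    the feed-forward path is duplicated, so per-token compute is unchanged
    (one expert runs per token, never both).
    """
    def __init__(self, d_model: int, n_heads: int, d_ff: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.experts = nn.ModuleDict({
            "think": MLPExpert(d_model, d_ff),
            "no_think": MLPExpert(d_model, d_ff),
        })

    def forward(self, x: torch.Tensor, mode: str) -> torch.Tensor:
        # `mode` is fixed for the whole sequence by the control-token
        # router, so exactly one expert ever runs. (Causal mask omitted
        # for brevity.)
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h)
        x = x + attn_out
        x = x + self.experts[mode](self.norm2(x))
        return x
```

Because the unselected expert is never invoked, it receives no activations and therefore no gradients during supervised fine-tuning, which is the mechanism behind the mode-pure updates the paper describes.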

Abstract

Hybrid-thinking language models expose explicit think and no-think modes, but current designs do not separate them cleanly. Even in no-think mode, models often emit long and self-reflective responses, causing reasoning leakage. Existing work reduces this issue through better data curation and multi-stage training, yet leakage remains because both modes are still encoded in the same feed-forward parameters. We propose Path-Lock Expert (PLE), an architecture-level solution that replaces the single MLP in each decoder layer with two semantically locked experts, one for think and one for no-think, while keeping attention, embeddings, normalization, and the language-model head shared. A deterministic control-token router selects exactly one expert path for the entire sequence, so inference preserves the dense model's per-token computation pattern and each expert receives mode-pure updates during supervised fine-tuning. Across math and science reasoning benchmarks, PLE maintains strong think performance while producing a substantially stronger no-think mode that is more accurate, more concise, and far less prone to reasoning leakage. On Qwen3-4B, for example, PLE reduces no-think reflective tokens on AIME24 from 2.54 to 0.39 and improves no-think accuracy from 20.67% to 40.00%, all while preserving think-mode performance. These results suggest that controllable hybrid thinking is fundamentally an architectural problem, and separating mode-specific feed-forward pathways is a simple and effective solution.
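
The deterministic router the abstract describes amounts to a single lookup on the prompt's control token, applied once per sequence rather than per token. Below is a minimal sketch under assumed names: the token IDs, the function route_mode, and the fallback behavior are all illustrative, not from the paper.

```python
import torch

# Placeholder control-token IDs; the real values depend on the tokenizer.
THINK_ID, NO_THINK_ID = 50001, 50002

def route_mode(input_ids: torch.Tensor) -> str:
    """Deterministically pick one expert path for the entire sequence."""
    if (input_ids == THINK_ID).any():
        return "think"
    if (input_ids == NO_THINK_ID).any():
        return "no_think"
    # Defaulting to no-think here is my assumption, not the paper's rule.
    return "no_think"

# Usage: choose the mode once from the prompt, then thread it through
# every layer so the same expert path is used end to end.
ids = torch.tensor([101, 50001, 7, 42])
mode = route_mode(ids)            # -> "think"
# for layer in model.layers:      # `model` would be a stack of PathLockLayer
#     x = layer(x, mode=mode)
```

During SFT, each training example carries exactly one control token, so each expert only ever sees gradients from its own mode's data; at inference, routing adds no per-token overhead, which is why the dense model's computation pattern is preserved.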