The Master Key Hypothesis: Unlocking Cross-Model Capability Transfer via Linear Subspace Alignment
arXiv cs.LG / 4/9/2026
Key Points
- The paper proposes the Master Key Hypothesis, claiming that specific post-trained capabilities correspond to directions within a low-dimensional latent subspace that can be transferred across model scales via linear alignment without retraining.
- It introduces UNLOCK, a training-free, label-free method that extracts a capability direction by contrasting activations from capability-present vs. capability-absent source variants, then aligns and applies that direction to a target model at inference time.
- Experiments on reasoning tasks (including Chain-of-Thought and mathematical reasoning) show substantial cross-model improvements even when transferring between different model sizes.
- Reported results include a 12.1% MATH accuracy gain when transferring CoT reasoning from Qwen1.5-14B to Qwen1.5-7B, and an AGIEval Math increase from 61.1% to 71.3% when transferring math reasoning between Qwen3 model variants.
- The authors argue transfer success depends on capabilities present from pre-training and suggest the intervention works by sharpening the output distribution toward successful reasoning trajectories.
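The extraction-and-steering procedure the key points describe can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the difference-in-means direction estimate, the linear alignment map `W`, and the steering coefficient `alpha` are all assumptions for the sketch; the paper's actual alignment and application details may differ.

```python
import numpy as np

def extract_direction(acts_present, acts_absent):
    """Estimate a capability direction by contrasting mean activations
    from a capability-present and a capability-absent source variant
    (difference-in-means sketch; the paper's estimator may differ)."""
    d = acts_present.mean(axis=0) - acts_absent.mean(axis=0)
    return d / np.linalg.norm(d)

def align_direction(d_source, W):
    """Map the source-model direction into the target model's activation
    space via a linear alignment map W (assumed given)."""
    d_target = W @ d_source
    return d_target / np.linalg.norm(d_target)

def steer(hidden_states, direction, alpha=1.0):
    """Apply the aligned direction to target hidden states at inference,
    with alpha controlling the intervention strength (hypothetical)."""
    return hidden_states + alpha * direction
```

Because every step is a matrix-vector operation on cached activations, the procedure requires no gradient updates to either model, which is consistent with the training-free, label-free framing above.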