TARo: Token-level Adaptive Routing for LLM Test-time Alignment
arXiv cs.CL / 3/20/2026
Key Points
- TARo introduces a token-level adaptive router that steers frozen LLMs toward structured reasoning entirely at inference time, using a reward model trained on step-wise mathematical traces to guide decoding.
- The method trains reward models to capture fine-grained logical consistency signals and uses a learnable token-level router to control how the reward model guides the base model.
- Experiments show TARo improves reasoning performance by up to 22.4% over the base model and 8.4% over existing token-level test-time alignment methods, and it generalizes from small to large backbones without retraining.
- TARo also boosts out-of-distribution clinical reasoning (MedXpertQA) and instruction following (AlpacaEval), extending test-time alignment from preference optimization to robust, cross-domain reasoning.
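The core mechanism described above, a learnable token-level gate that controls how strongly a reward model steers a frozen base model at each decoding step, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names, the additive logit blending, and the scalar `gate_weight` are all hypothetical simplifications of whatever routing function TARo actually learns.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def routed_next_token_dist(base_logits, reward_scores, gate_weight):
    """Blend frozen base-model logits with per-token reward scores.

    Hypothetical sketch: `gate_weight` in [0, 1] stands in for the
    router's output at the current position. 0 keeps the base model's
    distribution untouched; larger values lean harder on the reward
    model's token-level guidance.
    """
    guided = base_logits + gate_weight * reward_scores
    return softmax(guided)
```

With `gate_weight = 0` the base distribution is recovered exactly, so the router can fall back to the frozen model on tokens where reward guidance is unhelpful; a per-token gate (rather than a single global weight) is what distinguishes token-level routing from uniform reward-guided decoding.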