
TARo: Token-level Adaptive Routing for LLM Test-time Alignment

arXiv cs.CL / 3/20/2026

📰 News · Models & Research

Key Points

  • TARo introduces a token-level adaptive router that steers frozen LLMs toward structured reasoning entirely at inference time, guided by a reward model trained on step-wise mathematical traces.
  • The method trains reward models to capture fine-grained logical consistency signals and uses a learnable token-level router to control how the reward model guides the base model.
  • Experiments show TARo improves reasoning performance by up to 22.4% over the base model and 8.4% over existing token-level test-time alignment methods, and it generalizes from small to large backbones without retraining.
  • TARo also boosts out-of-distribution clinical reasoning (MedXpertQA) and instruction following (AlpacaEval), extending test-time alignment from preference optimization to robust, cross-domain reasoning.

Abstract

Large language models (LLMs) exhibit strong reasoning capabilities but typically require expensive post-training to reach high performance. Recent test-time alignment methods offer a lightweight alternative, but have been explored mainly for preference alignment rather than reasoning. To bridge this gap, we propose Token-level Adaptive Routing (TARo), which steers frozen LLMs toward structured reasoning entirely at inference time. Specifically, we first train reward models on step-wise mathematical traces to capture fine-grained logical consistency signals, then introduce a learnable token-level router that automatically controls how the reward model guides the base model. Extensive experiments show that TARo significantly improves reasoning performance, by up to +22.4% over the base model and +8.4% over existing token-level test-time alignment methods, while also boosting out-of-distribution clinical reasoning (MedXpertQA) and instruction following (AlpacaEval). Furthermore, TARo generalizes from small to large backbones without retraining, extending test-time alignment from preference optimization to robust, cross-domain reasoning.
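
The core mechanism described above — a router deciding, per token, how strongly a reward model should steer a frozen base model — can be sketched as a single guided decoding step. This is a minimal illustration, not the paper's actual formulation: the blending rule, the router's inputs, and all names here are assumptions for clarity.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a logit vector."""
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def guided_step(base_logits, reward_scores, router_weight):
    """One token-level test-time alignment step (illustrative only):
    blend base-model logits with per-token reward-model scores,
    scaled by a router-produced guidance weight. In TARo the weight
    would come from a learned token-level router; here it is a scalar
    we pass in by hand."""
    return softmax(base_logits + router_weight * reward_scores)

# Toy vocabulary of 5 tokens.
base_logits   = np.array([2.0, 1.0, 0.5, 0.1, -1.0])   # frozen LLM prefers token 0
reward_scores = np.array([-1.0, 2.0, 0.0, 0.5, 0.0])   # reward model favors token 1

p_off = guided_step(base_logits, reward_scores, router_weight=0.0)
p_on  = guided_step(base_logits, reward_scores, router_weight=1.5)

print(int(p_off.argmax()))  # 0: with guidance off, the base model's choice wins
print(int(p_on.argmax()))   # 1: with guidance on, the reward signal redirects decoding
```

The point of the router is that `router_weight` need not be constant: on tokens where logical consistency matters it can be large, and elsewhere near zero, leaving the frozen model untouched.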