From Tokens to Steps: Verification-Aware Speculative Decoding for Efficient Multi-Step Reasoning

arXiv cs.CL · April 17, 2026


Key Points

  • The paper proposes SpecGuard, a verification-aware speculative decoding framework that improves multi-step reasoning by preventing incorrect steps from propagating.
  • Instead of relying on external reward models, SpecGuard performs step-level validation using only model-internal signals.
  • It samples multiple draft candidates at each step, selects the most consistent step, and then decides whether to accept or recompute it using two lightweight internal scores (attention-based grounding and token log-probability confidence).
  • Experiments on multiple reasoning benchmarks report a 3.6% accuracy improvement and about 11% lower latency compared with standard speculative decoding and reward-guided approaches.
  • Overall, SpecGuard targets more efficient and more generalizable inference by allocating computation selectively based on verification signals.
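The consistency-based selection step above can be sketched as follows. This is a minimal illustration, not the paper's actual criterion: the `jaccard` token-overlap similarity and the `most_consistent` helper are hypothetical stand-ins for whatever consistency measure SpecGuard uses to pick among draft candidates.

```python
def jaccard(a: str, b: str) -> float:
    """Token-set overlap between two candidate steps
    (a simple, assumed proxy for step consistency)."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def most_consistent(candidates: list[str]) -> str:
    """Pick the draft step with the highest average similarity
    to the other candidates -- the idea being that a step agreed
    on by several independent drafts is more likely correct."""
    if len(candidates) == 1:
        return candidates[0]

    def avg_sim(c: str) -> float:
        others = [o for o in candidates if o is not c]
        return sum(jaccard(c, o) for o in others) / len(others)

    return max(candidates, key=avg_sim)
```

For example, given three sampled drafts where two agree, the agreeing step wins: `most_consistent(["x equals 5", "x equals 5", "x equals 9"])` returns `"x equals 5"`.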

Abstract

Speculative decoding (SD) accelerates large language model inference by letting a lightweight draft model propose outputs that a stronger target model verifies. However, its token-centric nature allows erroneous reasoning steps to propagate. Prior approaches mitigate this with external reward models, but these incur additional latency and computational overhead and limit generalizability. We propose SpecGuard, a verification-aware speculative decoding framework that performs step-level verification using only model-internal signals. At each step, SpecGuard samples multiple draft candidates and selects the most consistent step, which is then validated using an ensemble of two lightweight model-internal signals: (i) an attention-based grounding score that measures attribution to the input and previously accepted steps, and (ii) a log-probability-based score that captures token-level confidence. These signals jointly determine whether a step is accepted or recomputed by the target model, allocating compute selectively. Experiments across a range of reasoning benchmarks show that SpecGuard improves accuracy by 3.6% while reducing latency by ~11%, outperforming both standard SD and reward-guided SD.
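The accept-or-recompute decision described in the abstract can be sketched as below. This is a hedged illustration under assumed details: the threshold `TAU_ACCEPT`, the equal ensemble weights, and the exact forms of `grounding_score` and `confidence_score` are invented for the example, since the paper's summary does not specify them.

```python
import math

# Hypothetical threshold and ensemble weights (not from the paper).
TAU_ACCEPT = 0.5
W_GROUNDING, W_CONFIDENCE = 0.5, 0.5

def grounding_score(attn_to_context: list[float]) -> float:
    """Assumed form of the attention-based grounding signal: mean
    attention mass attributed to the input and previously accepted
    steps (each value assumed to lie in [0, 1])."""
    return sum(attn_to_context) / len(attn_to_context)

def confidence_score(token_logprobs: list[float]) -> float:
    """Assumed form of the log-probability signal: mean per-token
    probability of the draft step."""
    return sum(math.exp(lp) for lp in token_logprobs) / len(token_logprobs)

def verify_step(attn_to_context: list[float],
                token_logprobs: list[float]) -> str:
    """Ensemble the two internal signals; accept the draft step if
    the combined score clears the threshold, otherwise recompute
    the step with the target model."""
    score = (W_GROUNDING * grounding_score(attn_to_context)
             + W_CONFIDENCE * confidence_score(token_logprobs))
    return "accept" if score >= TAU_ACCEPT else "recompute"
```

The point of the design is that both signals come for free from the forward pass (attention maps and token log-probabilities), so no external reward model is queried; the expensive target-model recomputation is triggered only for low-scoring steps.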