SpecBound: Adaptive Bounded Self-Speculation with Layer-wise Confidence Calibration

arXiv cs.CL / April 15, 2026


Key Points

  • The paper proposes SpecBound, a self-draft speculative decoding method for LLMs that preserves exact output equivalence while speeding up autoregressive inference without changing base model parameters.
  • It addresses self-draft failures where shallow layers are overconfident by using layer-wise temperature annealing in early-exit decisions to better calibrate confidence.
  • It further improves efficiency by adaptively bounding the speculation length using token-wise decoding difficulty, reducing redundant deeper-layer computation on hard tokens.
  • SpecBound reprocesses draft-token hidden states via a unified parallel pass through deeper layers, maintaining correctness while improving compute efficiency.
  • Experiments report up to 2.33x wall-time speedup versus standard decoding across diverse long-form generation tasks and multiple model architectures.
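The summary does not give the paper's exact calibration rule, but the idea behind layer-wise temperature annealing in early-exit decisions can be sketched as follows. The function names (`annealed_temperature`, `early_exit_layer`) and the linear annealing schedule from `t_shallow` to `t_deep` are illustrative assumptions, not the paper's implementation: shallow layers get a higher softmax temperature, so their flattened distributions must show a larger logit margin before the exit threshold is cleared.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def annealed_temperature(layer, num_layers, t_shallow=2.0, t_deep=1.0):
    """Illustrative linear schedule: anneal temperature from t_shallow at
    the first layer down to t_deep at the last, so shallow layers are
    deliberately under-confident in the exit test."""
    frac = layer / max(num_layers - 1, 1)
    return t_shallow + frac * (t_deep - t_shallow)

def early_exit_layer(per_layer_logits, threshold=0.9):
    """Return the first layer whose calibrated top-token probability clears
    the threshold, or the last layer if none does."""
    num_layers = len(per_layer_logits)
    for layer, logits in enumerate(per_layer_logits):
        t = annealed_temperature(layer, num_layers)
        confidence = max(softmax(logits, temperature=t))
        if confidence >= threshold:
            return layer
    return num_layers - 1
```

With a flat temperature of 1.0, a shallow layer whose top logit leads by only ~3 nats would already clear a 0.9 threshold; under the annealed schedule it does not, and the exit moves one layer deeper, which is exactly the overconfidence suppression the first two key points describe.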

Abstract

Speculative decoding has emerged as a promising approach to accelerate autoregressive inference in large language models (LLMs). Self-draft methods, which leverage the base LLM itself for speculation, avoid the overhead of auxiliary draft models but face two limitations: shallow layers often produce overconfident yet incorrect token predictions, and difficult tokens in a draft sequence force redundant computation through deeper layers, undermining both draft acceptance and overall speedup. To address these issues, we propose a novel self-draft framework that suppresses spurious confidence via layer-wise temperature annealing in early-exit decisions and adaptively bounds speculation length based on token-wise decoding difficulty. By reprocessing the hidden states of draft tokens in a unified parallel pass through deep layers, our method maintains exact output equivalence with the original model while maximizing computational efficiency. It requires no modifications to the base LLM parameters and achieves up to 2.33x wall-time speedup over standard autoregressive decoding across diverse long-form generation tasks and multiple model architectures.
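The abstract's overall control flow, drafting until an adaptive, difficulty-dependent bound and then verifying against the full model while preserving exact output equivalence, can be sketched in a toy form. Everything here is an assumption for illustration: `draft_next` stands in for the shallow-exit drafter with a calibrated confidence, `target_next` for the full model's greedy step, and `conf_floor` for the difficulty cutoff; the paper's actual verification is a single parallel pass through the deep layers, which this sequential sketch only emulates.

```python
def speculate_and_verify(target_next, draft_next, prefix, max_tokens,
                         max_draft=8, conf_floor=0.5):
    """Greedy decoding with confidence-bounded self-speculation (toy sketch).

    target_next(seq) -> token: the full model's greedy next token.
    draft_next(seq) -> (token, conf): a shallow-exit draft token plus its
    calibrated confidence. Drafting stops at max_draft or at the first
    "hard" token (conf < conf_floor), bounding wasted deep-layer work.
    The output always equals plain greedy decoding with target_next.
    """
    out = list(prefix)
    while len(out) - len(prefix) < max_tokens:
        # 1) Draft cheaply until the adaptive bound or a hard token.
        draft, seq = [], list(out)
        while len(draft) < max_draft:
            tok, conf = draft_next(seq)
            if conf < conf_floor:
                break
            draft.append(tok)
            seq.append(tok)
        # 2) Verify: in the paper this is one unified parallel pass through
        # the deep layers; here the target is queried step by step.
        accepted = 0
        for tok in draft:
            if target_next(out) != tok:
                break
            out.append(tok)
            accepted += 1
            if len(out) - len(prefix) >= max_tokens:
                return out
        # 3) On a mismatch (or an empty draft), emit the target's own token,
        # guaranteeing progress and exact equivalence with greedy decoding.
        if accepted < len(draft) or not draft:
            out.append(target_next(out))
    return out[:len(prefix) + max_tokens]
```

The equivalence guarantee falls out of step 3: every emitted token is either a draft token the target itself would have produced, or the target's own token at the first disagreement, so the final sequence is identical to standard autoregressive decoding regardless of how good the drafter is.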