Multi-Drafter Speculative Decoding with Alignment Feedback

arXiv cs.CL / 4/8/2026


Key Points

  • Speculative decoding speeds up LLM inference by having a smaller model draft candidate tokens that the larger target model verifies to maintain output quality.
  • The paper argues that single drafters, especially those tuned to specific tasks/domains, do not generalize well to diverse applications.
  • It proposes MetaSD, a unified speculative-decoding framework that combines multiple heterogeneous drafters in one pipeline.
  • MetaSD formulates drafter selection as a multi-armed bandit problem, using alignment feedback to dynamically allocate compute to the most effective drafters.
  • Experiments reported in the study show MetaSD consistently outperforms single-drafter speculative decoding methods.
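The bandit formulation in the last two points can be sketched as follows. This is an illustrative assumption, not the paper's actual algorithm: a UCB1-style bandit where each arm is a drafter and the reward is the fraction of drafted tokens the target model accepts (one plausible reading of "alignment feedback"). Class and method names are hypothetical.

```python
import math

class DrafterBandit:
    """UCB1-style bandit over heterogeneous drafters (illustrative sketch).

    Reward = token-acceptance rate reported by the target model; the
    exact reward and selection rule in MetaSD may differ.
    """

    def __init__(self, num_drafters, c=1.0):
        self.counts = [0] * num_drafters    # times each drafter was chosen
        self.values = [0.0] * num_drafters  # running mean acceptance rate
        self.c = c                          # exploration weight
        self.t = 0                          # total selection rounds

    def select(self):
        self.t += 1
        # Try every drafter once before exploiting.
        for i, n in enumerate(self.counts):
            if n == 0:
                return i
        # UCB1: mean reward plus an exploration bonus that shrinks
        # as a drafter accumulates trials.
        return max(
            range(len(self.counts)),
            key=lambda i: self.values[i]
            + self.c * math.sqrt(2 * math.log(self.t) / self.counts[i]),
        )

    def update(self, drafter, accepted, drafted):
        # Alignment feedback: fraction of drafted tokens the target kept.
        reward = accepted / drafted if drafted else 0.0
        self.counts[drafter] += 1
        self.values[drafter] += (reward - self.values[drafter]) / self.counts[drafter]
```

Over many decoding steps, drafters whose proposals the target accepts more often are selected more frequently, which is one simple way to realize the dynamic compute allocation the paper describes.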

Abstract

Speculative decoding (SD) accelerates large language model (LLM) inference by using a smaller model to draft future tokens, which are then verified by the target LLM. This preserves generation quality by accepting only aligned tokens. However, individual drafters, often trained for specific tasks or domains, exhibit limited effectiveness across diverse applications. To address this, we introduce MetaSD, a unified framework that integrates multiple drafters into the SD process. MetaSD dynamically allocates computational resources to heterogeneous drafters by leveraging alignment feedback and framing drafter selection as a multi-armed bandit problem. Extensive experiments show MetaSD consistently outperforms single-drafter approaches.
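The draft-then-verify loop the abstract describes can be sketched minimally. This is a generic speculative-decoding step, not MetaSD itself, and it uses greedy token matching for verification (real SD typically uses probabilistic rejection sampling); the callables `draft_model` and `target_model` are hypothetical stand-ins that map a token prefix to a next token.

```python
def speculative_step(draft_model, target_model, prefix, k=4):
    """One speculative-decoding step (illustrative sketch).

    The small drafter proposes k tokens; the target checks each one
    and keeps only the prefix it agrees with, so the output matches
    what the target would have generated on its own.
    """
    # Drafting phase: the cheap model proposes k candidate tokens.
    drafted = []
    ctx = list(prefix)
    for _ in range(k):
        tok = draft_model(ctx)
        drafted.append(tok)
        ctx.append(tok)

    # Verification phase: accept drafted tokens while they match the
    # target; on the first mismatch, emit the target's own token.
    accepted = []
    ctx = list(prefix)
    for tok in drafted:
        if target_model(ctx) == tok:  # greedy check, a simplification
            accepted.append(tok)
            ctx.append(tok)
        else:
            accepted.append(target_model(ctx))
            break
    return accepted
```

When the drafter aligns well with the target, each step yields several tokens for one target pass; a misaligned drafter still costs a verification pass but yields only one token, which is the inefficiency that motivates choosing among multiple drafters.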