RACER: Retrieval-Augmented Contextual Rapid Speculative Decoding

arXiv cs.CL / 4/17/2026


Key Points

  • The paper introduces RACER, a training-free speculative decoding method that reduces LLM inference latency by drafting several tokens ahead with a cheap mechanism and verifying them with the target model in a guess-and-verify loop.
  • RACER combines retrieval of exact contextual patterns (for reliable anchors) with logits-based future cues (for flexible extrapolation), aiming to address weaknesses in prior retrieval-only and logits-only training-free approaches.
  • Experiments on Spec-Bench, HumanEval, and MGSM-ZH show RACER achieves over 2× speedup versus standard autoregressive decoding.
  • RACER also outperforms earlier training-free speculative decoding methods and is positioned as a scalable, plug-and-play technique, with code released on GitHub.

Abstract

Autoregressive decoding in Large Language Models (LLMs) generates one token per step, causing high inference latency. Speculative decoding (SD) mitigates this through a guess-and-verify strategy, but existing training-free variants face trade-offs: retrieval-based drafts break when no exact match exists, while logits-based drafts lack structural guidance. We propose RACER (Retrieval-Augmented Contextual Rapid Speculative Decoding), a lightweight and training-free method that integrates retrieved exact patterns with logit-driven future cues. This unification supplies both reliable anchors and flexible extrapolation, yielding richer speculative drafts. Experiments on Spec-Bench, HumanEval, and MGSM-ZH demonstrate that RACER consistently accelerates inference, achieving more than 2× speedup over autoregressive decoding, and outperforms prior training-free methods, offering a scalable, plug-and-play solution for efficient LLM decoding. Our source code is available at https://github.com/hkr04/RACER.
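To make the guess-and-verify idea concrete, the sketch below shows the retrieval-based drafting half of a training-free SD loop: the draft is produced by finding an earlier occurrence of the context's current n-gram suffix and copying the tokens that followed it, then the target model verifies the draft and the longest matching prefix is accepted. This is a toy illustration, not RACER itself: `target_next_token` stands in for one autoregressive step of a real LLM, and the function names, n-gram size, and draft length are all hypothetical choices; RACER additionally blends in logits-based cues, which are omitted here.

```python
def target_next_token(prefix, true_seq):
    # Stand-in for one autoregressive step of the target model
    # (hypothetical: in real use this would be an LLM forward pass).
    return true_seq[len(prefix)] if len(prefix) < len(true_seq) else None

def retrieve_draft(context, ngram=2, k=4):
    # Retrieval-based drafting: locate an earlier exact occurrence of the
    # current suffix n-gram and copy up to k tokens that followed it.
    if len(context) < ngram:
        return []
    key = tuple(context[-ngram:])
    for i in range(len(context) - ngram - 1, -1, -1):
        if tuple(context[i:i + ngram]) == key:
            return context[i + ngram:i + ngram + k]
    return []

def speculative_decode(prompt, true_seq, max_new=10):
    out = list(prompt)
    passes = 0  # target-model verification passes (vs. max_new AR steps)
    while len(out) - len(prompt) < max_new:
        draft = retrieve_draft(out)
        passes += 1
        # Verify: accept the longest prefix of the draft that the target
        # model agrees with, token by token.
        for tok in draft:
            if (len(out) - len(prompt) < max_new
                    and target_next_token(out, true_seq) == tok):
                out.append(tok)
            else:
                break
        # Append one "bonus" token from the target model, as in SD.
        nxt = target_next_token(out, true_seq)
        if nxt is None or len(out) - len(prompt) >= max_new:
            break
        out.append(nxt)
    return out[len(prompt):], passes
```

On repetitive input the retrieved drafts verify in long runs, so the number of verification passes falls well below the token count, which is the source of the speedup; when no exact match exists the draft is empty and the loop degrades to one-token-per-pass decoding, the weakness of retrieval-only methods that RACER's logits-based cues are meant to cover.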