ToolSpec: Accelerating Tool Calling via Schema-Aware and Retrieval-Augmented Speculative Decoding

arXiv cs.CL / April 16, 2026


Key Points

  • The paper analyzes LLM tool-calling traces and finds they follow constrained, schema-like structures with recurring invocation patterns.
  • It introduces ToolSpec, a schema-aware and retrieval-augmented speculative decoding method that uses tool schemas plus a finite-state mechanism to alternate between deterministic token filling and speculative generation.
  • ToolSpec further accelerates decoding by retrieving similar historical tool invocations and reusing them as drafts, reducing the work needed to predict tool-call sequences.
  • Experiments on multiple benchmarks show ToolSpec delivers up to a 4.2× speedup and outperforms prior training-free speculative decoding approaches for tool calling.
  • ToolSpec is designed as a plug-and-play component that can be integrated into existing LLM serving and workflow pipelines to address latency in multi-step tool interactions.
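The schema-aware drafting idea can be sketched in a few lines: fixed schema tokens (the function name, argument keys, braces, quotes) are filled deterministically, and only the variable argument values are left for the speculative draft step. The following is a minimal illustrative sketch, not the paper's actual finite-state machine; all names (`draft_tool_call`, `speculate`, the toy schema) are hypothetical.

```python
# Hypothetical sketch of schema-aware drafting: tokens dictated by the
# tool schema are emitted deterministically; only variable argument
# values are produced by a speculative "draft" step.

def draft_tool_call(schema: dict, speculate) -> str:
    """Build a draft tool call, invoking `speculate(field)` only for
    variable fields; everything else comes straight from the schema."""
    parts = ['{"name": "', schema["name"], '", "arguments": {']
    fields = list(schema["parameters"])
    for i, field in enumerate(fields):
        parts.append(f'"{field}": ')      # deterministic: key tokens from schema
        parts.append(speculate(field))    # speculative: value tokens
        if i < len(fields) - 1:
            parts.append(", ")
    parts.append("}}")
    return "".join(parts)

# Toy "speculator" that proposes a value for each variable field.
guesses = {"city": '"Paris"', "unit": '"celsius"'}
schema = {"name": "get_weather", "parameters": ["city", "unit"]}
print(draft_tool_call(schema, guesses.__getitem__))
# → {"name": "get_weather", "arguments": {"city": "Paris", "unit": "celsius"}}
```

Because the deterministic tokens never need verification by the target model, only the speculated value spans consume draft-and-verify work, which is where the speedup over vanilla speculative decoding comes from.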

Abstract

Tool calling has greatly expanded the practical utility of large language models (LLMs) by enabling them to interact with external applications. As LLM capabilities advance, effective tool use increasingly involves multi-step, multi-turn interactions to solve complex tasks. However, the resulting growth in tool interactions incurs substantial latency, posing a key challenge for real-time LLM serving. Through empirical analysis, we find that tool-calling traces are highly structured, conform to constrained schemas, and often exhibit recurring invocation patterns. Motivated by this, we propose ToolSpec, a schema-aware, retrieval-augmented speculative decoding method for accelerating tool calling. ToolSpec exploits predefined tool schemas to generate accurate drafts, using a finite-state machine to alternate between deterministic schema token filling and speculative generation for variable fields. In addition, ToolSpec retrieves similar historical tool invocations and reuses them as drafts to further improve efficiency. ToolSpec presents a plug-and-play solution that can be seamlessly integrated into existing LLM workflows. Experiments across multiple benchmarks demonstrate that ToolSpec achieves up to a 4.2x speedup, substantially outperforming existing training-free speculative decoding methods.
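The retrieval-augmented side of the approach can likewise be sketched: a cache of past tool invocations is searched for the entry most similar to the current context, and that entry is reused as a draft, which the target model then verifies token by token (speculative decoding accepts the longest prefix the target model agrees with, so outputs remain unchanged). This is an illustrative stand-in using character-level similarity and verification; the paper's actual retrieval and verification mechanisms are not specified here.

```python
# Illustrative sketch of retrieval-based draft reuse. The similarity
# metric (difflib ratio) and character-level verification are toy
# stand-ins, not ToolSpec's implementation.
import difflib

history = [
    '{"name": "get_weather", "arguments": {"city": "Paris"}}',
    '{"name": "search_web", "arguments": {"query": "LLM latency"}}',
]

def retrieve_draft(prefix: str) -> str:
    """Return the historical invocation most similar to the prefix."""
    return max(history,
               key=lambda h: difflib.SequenceMatcher(None, prefix, h).ratio())

def verify(draft: str, target_next_char) -> str:
    """Accept the longest prefix of the draft that the target model
    would itself produce, as in standard speculative decoding."""
    out = []
    for ch in draft:
        if target_next_char("".join(out)) != ch:
            break
        out.append(ch)
    return "".join(out)

# Toy "target model": agrees with the cached call up to the city value.
truth = '{"name": "get_weather", "arguments": {"city": "Tokyo"}}'
draft = retrieve_draft('{"name": "get_weather"')
accepted = verify(draft, lambda p: truth[len(p)] if len(p) < len(truth) else "")
print(accepted)  # the shared prefix of the cached draft and the target output
```

When invocation patterns recur, as the paper's trace analysis finds they often do, most of the retrieved draft is accepted in a single verification pass, amortizing the cost of autoregressive generation.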