AI Navigate

RelayCaching: Accelerating LLM Collaboration via Decoding KV Cache Reuse

arXiv cs.LG / 3/17/2026

📰 News · Developer Stack & Infrastructure · Models & Research

Key Points

  • RelayCaching is a training-free inference method that reuses decoding-phase KV caches from earlier agents to speed up subsequent prefill phases in multi-agent LLM collaboration.
  • The approach relies on the finding that KV caches for identical content are highly consistent across phases, while prefix-induced deviations are sparse and localized to a subset of layers and token positions.
  • By selectively recomputing KV caches only at the deviated positions, RelayCaching preserves model accuracy with minimal overhead and improves the accuracy-efficiency trade-off.
  • Experiments on mathematical reasoning, general knowledge, and code generation tasks show over 80% KV cache reuse and up to a 4.7x reduction in time-to-first-token (TTFT) compared with the standard pipeline, with negligible accuracy degradation.
  • This technique addresses the KV cache memory usage and TTFT bottlenecks in collaborative LLM systems, enabling more scalable multi-agent AI deployments.
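The core mechanism described above — reuse the previous agent's decoding-phase KV cache wholesale, then recompute only the sparse set of (layer, position) entries where prefix-induced deviation is significant — can be illustrated with a toy merge step. This is a hypothetical sketch: the function name `relay_merge`, the tensor shapes, and the assumption that a deviation mask has already been computed are all illustrative, not the paper's actual implementation.

```python
import numpy as np

# Simplified shapes (one tensor standing in for K or V):
#   KV caches:      (num_layers, seq_len, head_dim)
#   deviation_mask: (num_layers, seq_len) -- True where prefix-induced
#   deviation is large enough that the cached entry must be recomputed.

def relay_merge(cached_kv: np.ndarray,
                recomputed_kv: np.ndarray,
                deviation_mask: np.ndarray):
    """Reuse cached decoding-phase KV entries everywhere except the
    sparse set of deviated (layer, position) pairs, which take the
    freshly recomputed values."""
    merged = np.where(deviation_mask[..., None], recomputed_kv, cached_kv)
    reuse_rate = 1.0 - deviation_mask.mean()  # fraction of entries reused
    return merged, reuse_rate

# Toy demo: 4 layers, 16 positions, 8-dim heads; ~15% of entries deviate.
rng = np.random.default_rng(0)
cached = rng.normal(size=(4, 16, 8))   # stand-in for the prior agent's cache
fresh = rng.normal(size=(4, 16, 8))    # stand-in for recomputed entries
mask = rng.random((4, 16)) < 0.15

merged, reuse = relay_merge(cached, fresh, mask)
assert np.allclose(merged[~mask], cached[~mask])  # reused entries untouched
assert np.allclose(merged[mask], fresh[mask])     # deviated entries replaced
print(f"KV reuse rate: {reuse:.1%}")
```

In a real system the expensive step this avoids is the full prefill forward pass over the shared content; here only the masked positions would need recomputation, which is where the reported TTFT savings come from.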

Abstract

The increasing complexity of AI tasks has shifted the paradigm from monolithic models toward multi-agent large language model (LLM) systems. However, these collaborative architectures introduce a critical bottleneck: redundant prefill computation for shared content generated by previous agents, which significantly increases KV cache memory usage and time-to-first-token (TTFT). While various KV cache methods have been proposed to mitigate prefill redundancy, they either fail to maintain accuracy on agent-generated outputs or exhibit low reuse rates due to rigid constraints. We present RelayCaching, a training-free inference method that directly reuses decoding-phase KV caches from previous agents in subsequent prefill phases. Our key insight is that KV caches for identical content are highly consistent across phases, while prefix-induced deviations are sparse and localized within a limited range of layers and token positions. By selectively recomputing KV caches at these positions, RelayCaching preserves model accuracy with minimal overhead, yielding a superior accuracy-efficiency trade-off over existing methods. Experiments on diverse collaborative LLM tasks spanning mathematical reasoning, general knowledge, and code generation demonstrate that RelayCaching achieves over 80% KV cache reuse and reduces TTFT by up to 4.7x compared to the standard pipeline, all with negligible accuracy degradation.