QCFuse: Query-Centric Cache Fusion for Efficient RAG Inference

arXiv cs.AI / 4/13/2026


Key Points

  • Cache fusion techniques can speed up RAG-augmented LLM generation by reusing KV cache and selectively recomputing tokens, but prior approaches often lack global awareness of the user query when choosing what to recompute.
  • QCFuse is proposed as a query-centric KV cache fusion system that uses semantic summary anchors to build more accurate query representations without incurring prohibitive overhead.
  • It selectively recomputes tokens relevant to the user query, updating them according to the attention distribution of the most critical Transformer layer so that the computation pipeline stays efficient.
  • Experiments on real-world datasets show about a 40% improvement in response efficiency while maintaining equivalent accuracy versus existing methods.
  • In some cases, QCFuse also provides an attention denoising effect that can further improve response accuracy, suggesting additional inference optimization potential.
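The core selection step described above can be illustrated with a toy sketch: rank cached context tokens by the attention mass the query tokens pay to them in a single "critical" layer, and recompute only the top-ranked ones within a budget. This is a hedged illustration of the general idea, not the paper's implementation; the function name, shapes, and budget parameter are all assumptions for the example.

```python
import numpy as np

def select_tokens_to_recompute(attn_critical_layer, budget):
    """Pick which cached context tokens to recompute.

    attn_critical_layer: (num_query_tokens, num_context_tokens) attention
    weights from one chosen layer; budget: how many context tokens we can
    afford to recompute. Returns the selected positions in ascending order.
    """
    # Aggregate the attention mass each context token receives from the query.
    scores = attn_critical_layer.sum(axis=0)
    # Recompute the context tokens the query attends to most; the cached
    # KV entries for everything else are reused as-is.
    order = np.argsort(scores)[::-1]
    return np.sort(order[:budget])

rng = np.random.default_rng(0)
attn = rng.random((4, 16))
attn /= attn.sum(axis=1, keepdims=True)  # row-normalise like a softmax output
selected = select_tokens_to_recompute(attn, budget=4)
```

Keeping the analysis to one layer is what preserves the pipeline: attention from only that layer needs to be materialised and inspected before generation continues.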

Abstract

Cache fusion accelerates the generation process of LLMs equipped with RAG through KV caching and selective token recomputation, thereby reducing computational costs and improving efficiency. However, existing methods primarily rely on local perspectives for token selection and lack global awareness of the user query. Exploiting this global awareness is challenging due to the high cost of obtaining context-aware query representations and the strict pipeline constraints required for efficient attention analysis. This demonstration therefore introduces QCFuse, an innovative KV cache fusion system centered on the user query. QCFuse leverages semantic summary anchors to enhance query representations and selectively recomputes query-related tokens to improve accuracy, updating tokens based on the attention distribution of the most critical Transformer layer to preserve the high efficiency of the pipeline structure. Evaluations on real-world datasets demonstrate that QCFuse significantly improves the response efficiency of LLMs by 40% while maintaining accuracy equivalent to current methods. Additionally, in certain scenarios, QCFuse achieves an attention denoising effect that yields higher response accuracy, demonstrating substantial potential for the optimization of LLM inference.
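The cache-fusion setting the abstract assumes can be sketched as follows: KV caches are precomputed per retrieved chunk, concatenated at query time, and only a small set of stale positions is recomputed. This is a minimal illustration of that setting under stated assumptions; `fuse_kv_caches`, `recompute_fn`, and all shapes are hypothetical names for the example, not QCFuse's API.

```python
import numpy as np

def fuse_kv_caches(chunk_caches, recompute_idx, recompute_fn):
    """Fuse per-chunk KV caches and refresh selected positions.

    chunk_caches: list of (K, V) pairs, each of shape (chunk_len, d),
    precomputed independently per retrieved chunk; recompute_idx:
    positions in the fused sequence whose cached entries are stale;
    recompute_fn: maps a fused position to a fresh (k, v) pair.
    """
    # Reuse the precomputed caches wholesale by concatenation.
    K = np.concatenate([k for k, _ in chunk_caches], axis=0)
    V = np.concatenate([v for _, v in chunk_caches], axis=0)
    # Recompute only the selected positions; everything else stays cached.
    for pos in recompute_idx:
        K[pos], V[pos] = recompute_fn(pos)
    return K, V

# Usage: two precomputed 3-token chunks; refresh fused positions 1 and 4.
d = 8
chunks = [(np.zeros((3, d)), np.zeros((3, d))),
          (np.zeros((3, d)), np.zeros((3, d)))]
K, V = fuse_kv_caches(chunks, [1, 4], lambda pos: (np.ones(d), np.ones(d)))
```

The efficiency claim rests on the ratio of recomputed to reused positions: the smaller `recompute_idx` is, the closer the cost gets to pure cache reuse.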