ICaRus: Identical Cache Reuse for Efficient Multi Model Inference

arXiv cs.LG / 3/17/2026

Key Points

  • ICaRus proposes Identical Cache Reuse to allow multiple models to share an identical KV cache across all layers, dramatically reducing memory usage in multi-model inference.
  • The method conceptualizes a decoder-only Transformer as a logical encoder that generates KV caches and a logical decoder that produces tokens from those caches, enabling the encoder to be frozen while training only the decoder.
  • By freezing the encoder and using lightweight adapters like LoRA, ICaRus enables cross-model cache sharing and parallel KV cache generation with next-token prediction to cut recomputation.
  • In experiments with eight models, ICaRus achieves up to 11.1x lower P95 latency and 3.8x higher throughput while maintaining comparable accuracy to task-specific fine-tuned baselines.
  • The approach eliminates cache memory explosion and evictions in multi-model systems, offering scalable efficiency gains for agentic AI workflows.
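The memory argument in the key points above can be made concrete with a small accounting sketch. This is not the paper's implementation; the cache-key scheme, the model count, and the per-token KV size (layers, hidden width, fp16) are illustrative assumptions chosen only to show why a shared cache scales as O(1) in the number of models rather than O(N).

```python
from dataclasses import dataclass, field

# Hypothetical per-token KV footprint: 2 tensors (K and V) x 32 layers
# x 4096 hidden dims x 2 bytes (fp16). Purely illustrative numbers.
KV_BYTES_PER_TOKEN = 2 * 32 * 4096 * 2

@dataclass
class CacheStore:
    """Maps a cache key to the number of tokens whose KV entries it holds."""
    entries: dict = field(default_factory=dict)

    def ensure(self, key, num_tokens: int) -> None:
        # Store the cache only if this key is not already present.
        self.entries.setdefault(key, num_tokens)

    def bytes_used(self) -> int:
        return sum(n * KV_BYTES_PER_TOKEN for n in self.entries.values())

prompt_tokens = 1024
models = [f"model_{i}" for i in range(8)]

# Conventional serving: each model keys its cache by (model, prompt),
# so the identical prompt is stored (and prefilled) once per model.
per_model = CacheStore()
for m in models:
    per_model.ensure((m, "prompt"), prompt_tokens)

# ICaRus-style sharing: the logical encoder is frozen and common to all
# models, so one cache keyed by the prompt alone serves every decoder.
shared = CacheStore()
for m in models:
    shared.ensure("prompt", prompt_tokens)

print(per_model.bytes_used() // shared.bytes_used())  # 8x less KV memory
```

With eight models the conventional store holds eight identical copies of the prompt's KV cache, while the shared store holds one, which is the mechanism behind the "eliminates cache memory explosion and evictions" claim.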

Abstract

Multi-model inference has recently emerged as a prominent paradigm, particularly in the development of agentic AI systems. However, in such scenarios, each model must maintain its own Key-Value (KV) cache for the identical prompt, leading to substantial memory consumption. This explosive growth of KV caches forces LLM serving systems to evict previously stored caches, which in turn introduces significant recomputation overhead whenever the evicted caches are required again. Moreover, prefix caching is inherently infeasible across different models, forcing each model to recompute the KV cache for the identical prompt, which leads to significant overhead. To alleviate these issues, we propose Identical Cache Reuse (ICaRus), a novel architecture that allows multiple models to share identical KV caches across all layers. ICaRus is based on the key observation that a decoder-only Transformer can be conceptually decomposed into a logical encoder, which generates KV caches, and a logical decoder, which predicts output tokens from the KV caches. ICaRus fine-tunes only the logical decoder while freezing the logical encoder, enabling multiple models to share an identical KV cache. This eliminates cache memory explosion and unexpected evictions while also allowing cross-model reuse of KV caches for new input tokens, thereby removing redundant recomputation in multi-model inference and achieving both efficiency and scalability. Moreover, by incorporating lightweight adapters such as LoRA, ICaRus parallelizes KV cache generation and next-token prediction during decoding. ICaRus achieves accuracy comparable to task-specific fine-tuned models across a diverse set of tasks, while allowing multiple specialized models to fully share KV caches. ICaRus achieves up to 11.1x lower P95 latency and 3.8x higher throughput in a multi-agent workflow with eight different models, compared to a conventional multi-model system.
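The logical encoder/decoder decomposition described in the abstract can be illustrated with a toy, dependency-free attention example. This is a sketch under stated assumptions, not the paper's architecture: the matrix sizes, random weights, and single-head attention are hypothetical, and a fresh query matrix per model stands in for a LoRA-adapted decoder. The point it shows is that when the K/V projections (the logical encoder) are frozen and shared, two different decoders can attend to the very same cache object.

```python
import math
import random

random.seed(0)
D = 4  # toy hidden size

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def rand_mat():
    return [[random.uniform(-1, 1) for _ in range(D)] for _ in range(D)]

# Frozen logical encoder: one set of K/V projections, identical for all models.
W_k, W_v = rand_mat(), rand_mat()

def encode(tokens):
    """Prefill: build the KV cache for a prompt once."""
    return [(matvec(W_k, t), matvec(W_v, t)) for t in tokens]

def make_decoder(W_q):
    """Per-model logical decoder: only these weights differ (e.g. via LoRA)."""
    def decode(cache, x):
        q = matvec(W_q, x)
        # Softmax attention over the shared KV cache.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) for k, _ in cache]
        exps = [math.exp(s) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        return [sum(w * v[i] for w, (_, v) in zip(weights, cache))
                for i in range(D)]
    return decode

prompt = [[random.uniform(-1, 1) for _ in range(D)] for _ in range(3)]
shared_cache = encode(prompt)     # computed once, stored once

dec_a = make_decoder(rand_mat())  # e.g. a summarization specialist
dec_b = make_decoder(rand_mat())  # e.g. a coding specialist

x = [1.0, 0.0, 0.0, 0.0]
out_a = dec_a(shared_cache, x)    # different models, different outputs,
out_b = dec_b(shared_cache, x)    # one identical KV cache for both
```

Because only the decoder-side weights vary between specialists, evicting or recomputing the cache per model is unnecessary; each model's individuality lives entirely in its lightweight decoder adaptation.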