Prefill-as-a-Service: KVCache of Next-Generation Models Could Go Cross-Datacenter

Reddit r/LocalLLaMA / 4/19/2026

💬 Opinion · Developer Stack & Infrastructure · Ideas & Deep Analysis · Models & Research

Key Points

  • The article describes “Prefill-as-a-Service,” extending prefill/decode disaggregation to work across multiple datacenters rather than a single cluster.
  • It claims that cross-datacenter execution can substantially reduce cost per token, mainly by overcoming prior limitations from KV-cache transfer overhead.
  • The approach relies on a hybrid “Kimi Linear” model that reduces KV-cache size to make cross-DC prefill/decode practical.
  • In validation on a 20x scaled-up Kimi Linear model, the proposal reports 1.54× higher throughput and 64% lower P90 TTFT, translating to cheaper token generation.
  • More technical details are referenced via an associated arXiv paper (“Prefill-as-a-Service”).

Just sharing this here; I'm not sure whether it's relevant/useful for local models or not.

This is by Kimi/Moonshot. Source Tweet

We push Prefill/Decode disaggregation beyond a single cluster: cross-datacenter + heterogeneous hardware, unlocking the potential for significantly lower cost per token.

This was previously blocked by KV cache transfer overhead. The key enabler is our hybrid model (Kimi Linear), which reduces KV cache size and makes cross-DC PD practical.
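To see why KV-cache size is the blocker, here's a back-of-envelope sketch. All numbers below (layer counts, head dims, context length, link bandwidth, the hybrid layer ratio) are hypothetical illustrations, not figures from the paper; the point is only that a hybrid model with mostly fixed-size linear-attention state shrinks the per-request cache that must cross the inter-DC link.

```python
# Back-of-envelope: KV-cache transfer cost for cross-datacenter prefill/decode.
# All parameters are hypothetical, chosen for illustration only.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """Full-attention KV cache: K and V (factor of 2) per layer, per token."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

def transfer_ms(nbytes, link_gbps=100):  # assumed inter-DC bandwidth
    return nbytes * 8 / (link_gbps * 1e9) * 1e3

# Hypothetical dense model: 64 full-attention layers, 128k-token prompt.
dense = kv_cache_bytes(n_layers=64, n_kv_heads=8, head_dim=128, seq_len=128_000)

# Hypothetical hybrid (in the spirit of Kimi Linear): 1 in 4 layers keeps full
# attention; the linear-attention layers carry a fixed-size recurrent state
# that does not grow with sequence length (size here is illustrative).
hybrid = kv_cache_bytes(n_layers=16, n_kv_heads=8, head_dim=128, seq_len=128_000)
hybrid += 48 * 8 * 128 * 128 * 2  # fixed state for the 48 linear layers (~12 MB)

print(f"dense : {dense / 2**30:5.1f} GiB, ~{transfer_ms(dense):.0f} ms over 100 Gbps")
print(f"hybrid: {hybrid / 2**30:5.1f} GiB, ~{transfer_ms(hybrid):.0f} ms over 100 Gbps")
```

Under these made-up numbers the dense cache is ~31 GiB (seconds of transfer per request), while the hybrid cache is roughly a quarter of that, which is the kind of reduction that would make shipping prefill output between datacenters start to pencil out.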

Validated on a 20x scaled-up Kimi Linear model:
✅ 1.54× throughput
✅ 64% ↓ P90 TTFT
→ Directly translating into lower token cost.

More in Prefill-as-a-Service: arxiv.org/html/2604.15039v1

submitted by /u/pmttyji