StepCache: Step-Level Reuse with Lightweight Verification and Selective Patching for LLM Serving

arXiv cs.AI / 4/1/2026


Key Points

  • StepCache is a backend-agnostic, step-level reuse layer for LLM serving that reuses cached request “steps” when prompts share a solution structure but differ in localized constraints (e.g., schema, names, constants).
  • It retrieves the best-matching cached request, verifies each reused step with lightweight, task-aware checks, and selectively regenerates only the regions that fail via selective patching.
  • StepCache supports strict structured-output enforcement for JSON (including required-key constraints and one-shot repair) plus conservative skip-reuse fallbacks when semantic changes are detected.
  • For tasks like linear equations, it integrates verification into a bounded correction/repair loop with a deterministic fallback to guarantee correctness even if the backend model fails.
  • In CPU-only, perturbation-heavy micro-benchmarks (math and JSON variants), StepCache substantially cuts mean and median latency and total token usage (p95 latency improves only marginally), while improving correctness from 72.5% to 100% under task-specific checks and a stitched-output integrity check.
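The retrieve → verify → patch loop behind these key points can be sketched as follows. This is an illustrative reconstruction, not the paper's actual API: `CachedRequest`, `serve`, the Jaccard retrieval metric, and the `reuse_threshold` value are all assumptions standing in for StepCache's real retrieval and verification machinery.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class CachedRequest:
    prompt: str
    steps: list[str]  # ordered output segments ("steps")

def similarity(a: str, b: str) -> float:
    # Toy retrieval metric: token-overlap (Jaccard) similarity.
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

def serve(prompt: str,
          cache: list[CachedRequest],
          verify_step: Callable[[str, str], bool],
          regenerate_step: Callable[[str, int], str],
          generate_full: Callable[[str], list[str]],
          reuse_threshold: float = 0.6) -> tuple[list[str], str]:
    """Return (steps, path) where path is 'reuse', 'patch', or 'skip'."""
    best: Optional[CachedRequest] = max(
        cache, key=lambda c: similarity(prompt, c.prompt), default=None)
    if best is None or similarity(prompt, best.prompt) < reuse_threshold:
        # Conservative skip-reuse fallback: regenerate from scratch.
        return generate_full(prompt), "skip"
    patched = False
    steps: list[str] = []
    for i, step in enumerate(best.steps):
        if verify_step(prompt, step):   # lightweight task-aware check
            steps.append(step)
        else:                           # selective patching of failing regions
            steps.append(regenerate_step(prompt, i))
            patched = True
    return steps, ("patch" if patched else "reuse")
```

The three return paths mirror the reported request breakdown: reuse-only fast path, patching, and skip-reuse.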

Abstract

We address LLM serving workloads where repeated requests share a common solution structure but differ in localized constraints, such as output schema, variable names, or numeric constants. Prior caching approaches typically reuse either full responses (semantic caching) or model-internal KV/prefix states, which are respectively brittle under partial changes or tightly coupled to specific backends. We present StepCache, a backend-agnostic step-level reuse layer that segments outputs into ordered steps, retrieves the best-matching cached request, verifies steps using lightweight task-aware checks, and regenerates only failing regions via selective patching. StepCache additionally supports strict structured-output enforcement for JSON, including single-step extraction, required-key constraints, and one-shot repair, as well as conservative skip-reuse fallbacks for semantic changes. For linear equations, StepCache promotes verification into correction via a bounded repair loop with a deterministic fallback that guarantees correctness when the backend model fails. In a CPU-only perturbation-heavy micro-benchmark on math and JSON variants, averaged over three seeds, StepCache reduces mean latency from 2.13 s to 0.67 s, median latency from 2.42 s to 0.01 s, and p95 latency from 3.38 s to 3.30 s. It also reduces total token usage from 36.1k to 27.3k and improves end-to-end correctness from 72.5% to 100% under task-specific checks and a stitched-output integrity check. Across requests, 79.7% take the reuse-only fast path, 5.4% require patching, and 14.9% trigger skip-reuse.
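The strict structured-output enforcement the abstract describes for JSON can be sketched as below: parse the candidate, check the required keys, and allow exactly one repair attempt before failing. The function names and error handling here are illustrative assumptions; `repair` stands in for a backend-model repair call, which the paper's implementation may realize differently.

```python
import json
from typing import Callable, Optional

def enforce_json(candidate: str,
                 required_keys: set[str],
                 repair: Callable[[str], str]) -> dict:
    """Strict JSON enforcement with required keys and one-shot repair."""
    def check(text: str) -> Optional[dict]:
        try:
            obj = json.loads(text)
        except json.JSONDecodeError:
            return None
        # Required-key constraint: every required key must be present.
        if isinstance(obj, dict) and required_keys <= obj.keys():
            return obj
        return None

    obj = check(candidate)
    if obj is not None:
        return obj
    obj = check(repair(candidate))  # one-shot repair, no retry loop
    if obj is not None:
        return obj
    raise ValueError("structured-output enforcement failed after one repair")
```

Bounding repair to a single attempt keeps the enforcement lightweight: a request that cannot be fixed in one pass fails fast instead of looping against the backend.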
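For the linear-equation case, the "verification promoted into correction" idea can be sketched as a bounded repair loop over a backend solver, with an exact closed-form fallback so the final answer is always correct. This is a minimal sketch under assumed interfaces (`model_solve`, `max_repairs`), not the paper's code; it illustrates the guarantee for equations of the form a·x + b = c with a ≠ 0.

```python
from fractions import Fraction
from typing import Callable

def deterministic_solve(a: Fraction, b: Fraction, c: Fraction) -> Fraction:
    # Exact closed form for a*x + b = c with a != 0: x = (c - b) / a.
    return (c - b) / a

def verified_solve(a: int, b: int, c: int,
                   model_solve: Callable[[int, int, int], float],
                   max_repairs: int = 2) -> Fraction:
    """Bounded correction loop with a deterministic, guaranteed fallback."""
    a_, b_, c_ = Fraction(a), Fraction(b), Fraction(c)
    for _ in range(max_repairs + 1):    # initial attempt + bounded repairs
        x = Fraction(model_solve(a, b, c)).limit_denominator(10**6)
        if a_ * x + b_ == c_:           # lightweight check by substitution
            return x
    # Backend failed every attempt: fall back to the exact solver.
    return deterministic_solve(a_, b_, c_)
```

Because verification is substitution into the original equation and the fallback is exact arithmetic, the returned answer is correct even when every backend attempt fails, matching the 100% correctness claim for this task family.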