
98× Faster LLM Routing Without a Dedicated GPU: Flash Attention, Prompt Compression, and Near-Streaming for the vLLM Semantic Router

arXiv cs.CL / 3/16/2026

📰 News · Developer Stack & Infrastructure · Models & Research

Key Points

  • The paper presents three staged optimizations that give the vLLM Semantic Router a 98× overall speedup and a GPU footprint under 800 MB, removing the need for a dedicated GPU.
  • Stage 1: a custom CK Flash Attention operator for ONNX Runtime on ROCm reduces attention memory from O(n^2) to O(n) and drops end-to-end latency from 4,918 ms to 127 ms, enabling 8K–32K token contexts.
  • Stage 2: classical NLP prompt compression (TextRank, position weighting, TF-IDF, and novelty scoring) compresses inputs to about 512 tokens, keeping latency and memory effectively constant regardless of original prompt length and cutting end-to-end latency from 127 ms to 62 ms (see the sketch after this list).
  • Stage 3: near-streaming body processing with adaptive chunking and zero-copy JSON eliminates serialization overhead, lowering end-to-end latency from 62 ms to 50 ms, for a total routing latency of about 50 ms with 16K-token capability.
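
To make the Stage 2 idea concrete, here is a minimal, self-contained sketch of classical (non-neural) prompt compression: sentences are scored with TF-IDF salience and a position prior, then selected greedily with a crude novelty check until a roughly 512-token budget is filled. All function names, weights, and thresholds below are illustrative assumptions, not the paper's implementation, and the TextRank centrality term the paper also uses is omitted for brevity.

```python
# Sketch of Stage 2-style prompt compression under the assumptions stated above.
import math
import re
from collections import Counter

TOKEN_BUDGET = 512  # assumed target length, matching the paper's ~512-token cap

def sentences(text: str) -> list[str]:
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def tokens(sent: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", sent.lower())

def compress(prompt: str, budget: int = TOKEN_BUDGET) -> str:
    sents = sentences(prompt)
    docs = [Counter(tokens(s)) for s in sents]
    n = len(sents)
    if n == 0:
        return prompt
    # Inverse document frequency computed over sentences.
    df = Counter(t for d in docs for t in d)
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}

    # TF-IDF salience multiplied by a position prior (earlier sentences weighted higher).
    def salience(i: int) -> float:
        tfidf = sum(c * idf[t] for t, c in docs[i].items()) / (sum(docs[i].values()) or 1)
        position = 1.0 / (1.0 + 0.05 * i)
        return tfidf * position

    ranked = sorted(range(n), key=salience, reverse=True)
    # Greedy selection with a novelty check: skip sentences whose tokens are
    # mostly already covered by kept sentences (a crude stand-in for novelty scoring).
    kept, covered, used = [], set(), 0
    for i in ranked:
        toks = set(docs[i])
        if toks and len(toks - covered) / len(toks) < 0.3:
            continue  # too redundant with already-selected content
        length = sum(docs[i].values())
        if used + length > budget:
            continue
        kept.append(i)
        covered |= toks
        used += length
    kept.sort()  # restore original order for readability
    return " ".join(sents[i] for i in kept)
```

Because the scoring uses only counting and sorting, the compression step itself needs no GPU memory and runs in time linear in the prompt length, which is what keeps downstream classification cost constant regardless of how long the original request was.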

Abstract

System-level routers that intercept LLM requests for safety classification, domain routing, and PII detection must be both fast and operationally lightweight: they should add minimal latency to every request, yet not require a dedicated GPU, an expensive resource better used for LLM inference itself. When the router co-locates on the same GPU as vLLM serving instances, standard attention's O(n^2) memory makes long-context classification (8K–32K tokens) impossible: at 8K tokens, three concurrent classifiers need ~4.5 GB for attention masks alone, far exceeding the memory left by vLLM. We present three staged optimizations for the vLLM Semantic Router, benchmarked on AMD Instinct MI300X, that solve both the latency and the memory problem. Stage 1: a custom CK Flash Attention operator for ONNX Runtime on ROCm reduces attention memory from O(n^2) to O(n) and end-to-end (E2E) latency from 4,918 ms to 127 ms (38.7×), enabling 8K–32K tokens where SDPA OOMs. Stage 2: classical NLP prompt compression (TextRank, position weighting, TF-IDF, and novelty scoring) reduces all inputs to ~512 tokens without neural inference, capping both latency and GPU memory at a constant regardless of original prompt length (E2E 127 ms to 62 ms, 2.0×). Stage 3: near-streaming body processing with adaptive chunking and zero-copy JSON eliminates serialization overhead (E2E 62 ms to 50 ms, 1.2×). Cumulatively: a 98× improvement (4,918 ms to 50 ms), 16K-token routing in 108 ms, and a total router GPU footprint under 800 MB, small enough to share a GPU with LLM serving and removing the need for a dedicated accelerator. Stage 1 targets AMD ROCm (NVIDIA GPUs already have FlashAttention via cuDNN); Stages 2 and 3 are hardware-agnostic.
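
The abstract's memory argument is easy to sanity-check with back-of-the-envelope arithmetic. The head count and dtype below are assumptions chosen to reproduce the ~4.5 GB figure (the abstract does not state them), but the scaling point holds regardless: materializing n×n attention matrices grows quadratically with context length, while a Flash Attention kernel keeps only an O(n) working set.

```python
# Rough memory estimate for the co-location problem described in the abstract.
# Head count and dtype are assumptions for illustration, not values from the paper.
n_tokens    = 8 * 1024      # 8K-token classification input
n_heads     = 12            # assumed encoder head count (BERT-size classifier)
bytes_elem  = 2             # assumed fp16 attention scores
classifiers = 3             # safety, domain routing, and PII detection run concurrently

per_classifier = n_tokens ** 2 * n_heads * bytes_elem            # materialized n x n scores
total_sdpa     = classifiers * per_classifier
print(f"SDPA attention matrices: {total_sdpa / 2**30:.1f} GiB")  # ~4.5 GiB, quadratic in n

# Flash Attention never materializes the n x n matrix; it streams K/V tiles and keeps
# only O(n) running statistics, so the same inputs need on the order of megabytes.
flash_per_classifier = n_tokens * n_heads * bytes_elem * 4       # rough O(n) working set
print(f"Flash-style working set: {classifiers * flash_per_classifier / 2**20:.1f} MiB")
```

Stage 3's near-streaming body processing can be illustrated in the same spirit. The sketch below shows only the chunking-and-single-parse idea: the per-read size adapts to the body size, bytes land directly in one pre-sized buffer, and the JSON is decoded exactly once. The router itself is not written in Python, and true zero-copy JSON parsing is not shown here; all names and the chunk-size policy are assumptions.

```python
# Minimal sketch, assuming a file-like body stream and a known Content-Length.
import io
import json

def adaptive_chunk_size(total: int, lo: int = 4 * 1024, hi: int = 256 * 1024) -> int:
    """Scale the per-read chunk to the body size, clamped to sane bounds."""
    return max(lo, min(hi, total // 8 or lo))

def read_json_body(stream: io.BufferedIOBase, content_length: int) -> dict:
    buf = bytearray(content_length)            # one pre-sized buffer: no growth copies
    view = memoryview(buf)
    chunk = adaptive_chunk_size(content_length)
    filled = 0
    while filled < content_length:
        n = stream.readinto(view[filled:filled + chunk])  # bytes land directly in buf
        if not n:
            raise ValueError("truncated request body")
        filled += n
    return json.loads(buf)                     # decode the complete body exactly once

payload = b'{"model": "demo", "messages": [{"role": "user", "content": "hi"}]}'
print(read_json_body(io.BytesIO(payload), len(payload))["model"])  # -> demo
```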
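Compared with re-buffering and re-serializing the body at each processing step, this single-pass pattern is where the remaining 62 ms to 50 ms reduction plausibly comes from; the paper's reported gains, of course, are from its own Stage 3 implementation rather than from this sketch.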