Sub-Token Routing in LoRA for Adaptation and Query-Aware KV Compression
arXiv cs.LG / 4/24/2026
Key Points
- The paper proposes sub-token routing within LoRA-adapted transformers as a finer-grained efficiency control than earlier, coarser routing units such as tokens, heads, or layers (see the first sketch after this list).
- It argues that, under KV retention budgets, important information is unevenly distributed both across tokens and within tokens, so KV compression should not be an all-or-nothing per-token choice.
- For language modeling, the authors introduce a query-independent method that combines routed-subspace LoRA with value-group routing on the KV path to improve the quality–compression tradeoff (second sketch below).
- For downstream tasks, they present a query-aware approach in which a predictor-based selector allocates a global retention budget across context (token, value-group) pairs conditioned on query relevance (third sketch below).
- Experiments indicate that query-independent routing benefits language modeling, while query-aware routing better preserves downstream behavior at reduced KV budgets; the study also finds that token-level and sub-token-level routing act as complementary compression axes.
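
To make the adaptation side concrete, the sketch below is a minimal PyTorch illustration of routed-subspace LoRA: the adapter's rank-R update is partitioned into G rank-(R/G) subspaces, and a per-token router activates only the top-k of them. The group count, softmax router, and top-k gating here are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch of routed-subspace LoRA. Assumption: the router, group
# count, and top-k gating are hypothetical choices to show the mechanism.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RoutedSubspaceLoRA(nn.Module):
    """LoRA whose rank-R update is split into G rank-(R//G) subspaces;
    a per-token router activates only the top-k subspaces per token."""
    def __init__(self, d_in, d_out, rank=16, groups=4, top_k=2, alpha=16.0):
        super().__init__()
        assert rank % groups == 0
        self.groups, self.top_k = groups, top_k
        self.scale = alpha / rank
        r = rank // groups
        # One (A_g, B_g) low-rank pair per subspace group.
        self.A = nn.Parameter(torch.randn(groups, d_in, r) * 0.01)
        self.B = nn.Parameter(torch.zeros(groups, r, d_out))
        self.router = nn.Linear(d_in, groups)

    def forward(self, x):  # x: (batch, seq, d_in)
        gates = F.softmax(self.router(x), dim=-1)      # (B, S, G)
        top_val, top_idx = gates.topk(self.top_k, dim=-1)
        mask = torch.zeros_like(gates).scatter(-1, top_idx, top_val)
        # Per-group low-rank update x @ A_g @ B_g, weighted by its gate.
        h = torch.einsum("bsd,gdr->bsgr", x, self.A)
        y = torch.einsum("bsgr,gro->bsgo", h, self.B)
        return self.scale * (mask.unsqueeze(-1) * y).sum(dim=2)
```

Because the routing unit is a rank-subspace of a single token's adapter update, the granularity sits below the token, head, and layer units used by earlier routing methods.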
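On the KV path, value-group routing treats each cached value vector as a set of channel groups and retains a budgeted subset of (token, group) pairs instead of dropping whole tokens. A minimal sketch follows, assuming importance scores are already available per pair and modeling compression as masking; the scoring function and group layout are assumptions, not the paper's design.

```python
# Sketch of sub-token KV compression via value-group routing.
import torch

def compress_value_cache(v, scores, keep_ratio=0.5, groups=8):
    """v:      (seq, d_model) cached value vectors
    scores:    (seq, groups) importance of each (token, value-group) pair
    Keeps the top keep_ratio fraction of (token, group) pairs under one
    global budget and zeroes the rest, so a token can be kept partially
    rather than as an all-or-nothing choice."""
    seq, d = v.shape
    assert d % groups == 0
    gsize = d // groups
    budget = max(1, int(keep_ratio * seq * groups))
    flat = scores.reshape(-1)            # (seq * groups,)
    keep = flat.topk(budget).indices     # globally best pairs
    mask = torch.zeros_like(flat)
    mask[keep] = 1.0
    mask = mask.view(seq, groups, 1)     # broadcast over group channels
    return (v.view(seq, groups, gsize) * mask).reshape(seq, d)
```

Here zeroing stands in for dropping; a real cache would store only the kept groups and skip the rest during attention.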
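For the query-aware downstream setting, the paper describes a predictor-based selector that spends a global retention budget on (token, value-group) pairs conditioned on query relevance. The two-layer MLP below is a hypothetical stand-in for that predictor; only the input/output contract follows the key point above.

```python
# Sketch of a query-aware (token, value-group) selector. Assumption: the
# MLP predictor and its features are hypothetical; the paper specifies
# only a predictor-based, query-conditioned budget allocation.
import torch
import torch.nn as nn

class QueryAwareSelector(nn.Module):
    def __init__(self, d_model, groups=8, d_hidden=64):
        super().__init__()
        self.groups = groups
        gsize = d_model // groups
        # Scores one (token, value-group) pair from its channels plus
        # a query summary vector.
        self.predictor = nn.Sequential(
            nn.Linear(gsize + d_model, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, 1),
        )

    def forward(self, v, q, keep_ratio=0.25):
        """v: (seq, d_model) context values, q: (d_model,) query summary.
        Returns a 0/1 mask over (token, group) pairs under a global budget."""
        seq, d = v.shape
        vg = v.view(seq, self.groups, d // self.groups)
        qexp = q.expand(seq, self.groups, -1)
        scores = self.predictor(torch.cat([vg, qexp], dim=-1)).squeeze(-1)
        budget = max(1, int(keep_ratio * seq * self.groups))
        keep = scores.reshape(-1).topk(budget).indices
        mask = torch.zeros(seq * self.groups, device=v.device)
        mask[keep] = 1.0
        return mask.view(seq, self.groups)
```

The returned mask can then gate a value cache exactly as in the query-independent sketch; the difference is that scores now depend on the query, matching the paper's split between language modeling and downstream use.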