Reward-Based Online LLM Routing via NeuralUCB
arXiv cs.CL / 4/1/2026
Key Points
- The paper proposes using NeuralUCB to perform cost-aware online routing among large language models, framing routing as a reward-driven decision problem with limited feedback.
- It contrasts existing approaches (supervised routing vs. partial-feedback routing) and motivates how NeuralUCB can balance adaptivity and efficiency in a simulated online environment.
- Experiments on RouterBench show the NeuralUCB routing policy achieves higher utility reward than random and min-cost baselines.
- Compared with a max-quality reference, the method significantly reduces inference cost while keeping reward competitive, indicating a strong cost–quality tradeoff.
- The study also notes open challenges, including discriminating between similarly scoring candidate models (action discrimination) and how effective exploration remains in the routing setting.
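The key points above can be sketched concretely. Below is a minimal, illustrative NeuralUCB router in the spirit of the paper: a small neural network per candidate model predicts the reward of routing a query there, and the UCB exploration bonus uses the network's parameter gradient as a feature map. This is not the paper's implementation; the quality functions, per-call costs, and `COST_WEIGHT` reward form (quality minus weighted cost) are all invented here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyNeuralUCB:
    """One-hidden-layer NeuralUCB reward model for a single candidate LLM."""

    def __init__(self, dim, hidden=8, lam=1.0, gamma=0.5, lr=0.05):
        self.W1 = rng.normal(0.0, 1.0 / np.sqrt(dim), (hidden, dim))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 1.0 / np.sqrt(hidden), hidden)
        self.b2 = 0.0
        p = self.W1.size + 2 * hidden + 1      # total parameter count
        self.Zinv = np.eye(p) / lam            # inverse design matrix Z^{-1}
        self.gamma, self.lr = gamma, lr

    def _forward(self, x):
        h = np.maximum(self.W1 @ x + self.b1, 0.0)   # ReLU hidden layer
        return h, float(self.w2 @ h + self.b2)

    def _grad(self, x):
        """Gradient of the scalar output w.r.t. all parameters, flattened."""
        h, _ = self._forward(x)
        dpre = self.w2 * (h > 0)               # d(out)/d(pre-activation)
        return np.concatenate([np.outer(dpre, x).ravel(), dpre, h, [1.0]])

    def ucb(self, x):
        _, mu = self._forward(x)
        g = self._grad(x)
        return mu + self.gamma * np.sqrt(g @ self.Zinv @ g)

    def update(self, x, reward):
        g = self._grad(x)
        Zg = self.Zinv @ g                     # Sherman-Morrison rank-1 update
        self.Zinv -= np.outer(Zg, Zg) / (1.0 + g @ Zg)
        _, mu = self._forward(x)
        flat = self.lr * (mu - reward) * g     # one SGD step on squared error
        n1, h = self.W1.size, self.b1.size
        self.W1 -= flat[:n1].reshape(self.W1.shape)
        self.b1 -= flat[n1:n1 + h]
        self.w2 -= flat[n1 + h:n1 + 2 * h]
        self.b2 -= flat[-1]

# Demo: cost-aware routing among three hypothetical models under bandit
# feedback -- only the chosen model's reward is observed each round.
DIM, COST_WEIGHT, T = 4, 0.5, 400
models = [  # (hidden quality weights, per-call cost) -- invented values
    (rng.normal(size=DIM), 1.0),   # "large" model: strong but expensive
    (rng.normal(size=DIM), 0.3),   # "medium" model
    (rng.normal(size=DIM), 0.05),  # "small" model: cheap
]
learners = [TinyNeuralUCB(DIM) for _ in models]

cum_reward = 0.0
for t in range(T):
    x = rng.normal(size=DIM)                        # query features
    scores = [lrn.ucb(x) for lrn in learners]       # optimistic reward per model
    a = int(np.argmax(scores))                      # route to the best UCB score
    w, cost = models[a]
    quality = sigmoid(w @ x) + 0.05 * rng.normal()  # noisy observed quality
    r = quality - COST_WEIGHT * cost                # cost-aware bandit reward
    learners[a].update(x, r)                        # only the chosen arm learns
    cum_reward += r
```

The Sherman-Morrison update keeps the per-round cost of maintaining `Z^{-1}` quadratic in the parameter count, which is why the sketch uses a deliberately tiny network; the real algorithm scales this with wider networks and a regularized confidence radius.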
Related Articles

Knowledge Governance For The Agentic Economy.
Dev.to

AI server farms heat up the neighborhood for miles around, paper finds
The Register

Does the Claude “leak” actually change anything in practice?
Reddit r/LocalLLaMA

87.4% of My Agent's Decisions Run on a 0.8B Model
Dev.to

Paperclip: a free tool that turns AI agents into a software team
Dev.to