Reward-Based Online LLM Routing via NeuralUCB

arXiv cs.CL / 4/1/2026


Key Points

  • The paper proposes using NeuralUCB to perform cost-aware online routing among large language models, framing routing as a reward-driven decision problem with limited feedback.
  • It contrasts existing approaches (supervised routing vs. partial-feedback routing) and explains how NeuralUCB can balance adaptivity and efficiency in a simulated online environment.
  • Experiments on RouterBench show the NeuralUCB routing policy achieves higher utility reward than random and min-cost baselines.
  • Compared with a max-quality reference, the method significantly reduces inference cost while keeping reward competitive, indicating a strong cost–quality tradeoff.
  • The study also notes open challenges, including action discrimination and the effectiveness of exploration in the routing setting.
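
For context, the original NeuralUCB algorithm (Zhou, Li, and Gu, 2020) scores each arm optimistically; the paper's exact instantiation for routing may differ, but the standard selection rule is:

```latex
U_{t,a} \;=\; f\bigl(x_{t,a};\,\theta_{t-1}\bigr)
\;+\; \gamma_{t-1}\,
\sqrt{\, g(x_{t,a};\theta_{t-1})^{\top}\, Z_{t-1}^{-1}\, g(x_{t,a};\theta_{t-1}) \,/\, m \,}
```

Here $f$ is a neural network's reward estimate for arm $a$ given context $x_{t,a}$, $g = \nabla_\theta f$ is its parameter gradient, $Z_{t-1}$ is a running gradient covariance matrix, $m$ is the network width, and $\gamma_{t-1}$ weights exploration. The router queries the LLM with the highest $U_{t,a}$ and, under bandit feedback, updates only that arm's statistics.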

Abstract

This study investigates the use of NeuralUCB for cost-aware large language model (LLM) routing. Existing routing approaches can be broadly grouped into supervised routing methods and partial-feedback methods, each with different tradeoffs in efficiency and adaptivity. We implement a NeuralUCB-based routing policy and evaluate it on RouterBench under a simulated online setting. Experimental results show that the proposed method consistently outperforms random and min-cost baselines in utility reward. Compared with the max-quality reference, our method achieves substantially lower inference cost while maintaining competitive reward. These findings suggest that NeuralUCB is a promising approach for cost-aware LLM routing, while also highlighting remaining challenges in action discrimination and exploration.
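
To make the setup concrete, below is a minimal, hypothetical sketch of a NeuralUCB-style router in the spirit the abstract describes: each candidate LLM is an arm with a tiny reward network and a gradient-based confidence matrix, and the observed reward is a cost-aware utility (quality minus weighted cost). The cost weight `LAMBDA`, the per-arm network, the synthetic quality signal, and all hyperparameters are assumptions for illustration, not values from the paper.

```python
import numpy as np

LAMBDA = 0.5  # assumed weight of cost in the utility reward

class NeuralUCBArm:
    """A tiny one-hidden-layer reward model plus a gradient-based
    confidence matrix, one instance per candidate LLM (arm)."""

    def __init__(self, dim, hidden=8, gamma=1.0, reg=1.0, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 1.0 / np.sqrt(dim), (hidden, dim))
        self.w2 = rng.normal(0.0, 1.0 / np.sqrt(hidden), hidden)
        p = self.W1.size + self.w2.size
        self.Z = reg * np.eye(p)   # regularized gradient covariance
        self.gamma, self.lr = gamma, lr

    def _forward(self, x):
        h = np.tanh(self.W1 @ x)
        return h, float(self.w2 @ h)

    def _grad(self, x, h):
        # gradient of the scalar output w.r.t. all parameters, flattened
        dh = self.w2 * (1.0 - h ** 2)
        return np.concatenate([np.outer(dh, x).ravel(), h])

    def ucb(self, x):
        # optimistic score: predicted reward + exploration bonus
        h, mu = self._forward(x)
        g = self._grad(x, h)
        bonus = self.gamma * np.sqrt(g @ np.linalg.solve(self.Z, g))
        return mu + bonus

    def update(self, x, reward):
        # bandit feedback: only the chosen arm observes its reward
        h, mu = self._forward(x)
        g = self._grad(x, h)
        self.Z += np.outer(g, g)          # shrinks future bonuses here
        err = mu - reward                 # one SGD step on squared error
        dh = self.w2 * (1.0 - h ** 2)
        self.w2 -= self.lr * err * h
        self.W1 -= self.lr * err * np.outer(dh, x)

def route(arms, x):
    """Send the query to the arm with the highest optimistic utility."""
    return int(np.argmax([a.ucb(x) for a in arms]))

# Toy online loop: pricier models cost more but (here) answer better.
rng = np.random.default_rng(1)
COSTS = np.array([0.1, 0.5, 1.0])        # hypothetical per-query costs
arms = [NeuralUCBArm(dim=4, seed=k) for k in range(3)]
picks = []
for t in range(100):
    x = rng.normal(size=4)
    k = route(arms, x)
    quality = 0.5 + 0.4 * np.tanh((k + 1) * x.mean())  # synthetic quality
    arms[k].update(x, quality - LAMBDA * COSTS[k])
    picks.append(k)
```

The per-arm design is a simplification; the original NeuralUCB uses a single network over context-arm pairs, and a real routing experiment would replace the synthetic quality signal with RouterBench scores.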