A Comparative Analysis on the Performance of Upper Confidence Bound Algorithms in Adaptive Deep Neural Networks
arXiv cs.LG / 4/29/2026
📰 News · Models & Research
Key Points
- The paper targets edge computing, where strict latency and energy limits require adaptive neural-network inference that balances cost against predictive accuracy.
- Building on adaptive deep neural networks (ADNNs) that use a Multi-Armed Bandit approach, it expands beyond the commonly used UCB1 strategy by adding four UCB variants: UCB-V, UCB-Tuned, UCB-Bayes, and UCB-BwK.
- The authors present the first comparative study of these UCB strategies, evaluating accuracy, latency, and energy trade-offs with ResNet and MobileViT on CIFAR-10, CIFAR-10.1, and CIFAR-100.
- All tested strategies show sub-linear cumulative regret, with UCB-Bayes converging fastest, and UCB-V and UCB-Tuned producing the best Pareto-optimal accuracy–latency and accuracy–energy results.