Cloud to Edge: Benchmarking LLM Inference On Hardware-Accelerated Single-Board Computers

arXiv cs.AI / 4/29/2026

💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Models & Research

Key Points

  • The paper addresses why running LLM inference locally on single-board computers is challenging compared to cloud deployments, especially in privacy-, latency-, and cost-sensitive settings such as defense and operational technology (OT).
  • It argues that current edge LLM benchmarking is insufficient because it often uses CPU-only setups, covers single-board computers poorly, and relies on evaluation tasks that do not measure hardware effectiveness in a multi-dimensional way.
  • The authors propose a multi-dimensional benchmarking methodology that evaluates both inference performance and hardware efficiency across four IoT-suitable edge configurations using the latest available accelerators.
  • The results show that hardware accelerators such as NPUs and GPUs improve practical deployment trade-offs, with measurements capturing power efficiency, device size, and token throughput.
  • The study provides actionable guidance for deploying generative AI in privacy-sensitive and connectivity-limited scenarios, including unmanned vehicles and portable rugged operations.
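The paper itself does not publish its scoring formulas in this summary, but the trade-off metrics it describes (token throughput versus power efficiency versus device size) can be illustrated with a minimal sketch. All names and figures below are hypothetical, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class RunResult:
    """One benchmark run on a single edge configuration (hypothetical schema)."""
    tokens_generated: int      # tokens produced during the run
    wall_time_s: float         # total inference wall-clock time, seconds
    avg_power_w: float         # mean board power draw during inference, watts
    device_volume_cm3: float   # physical footprint of the platform

def throughput_tok_s(r: RunResult) -> float:
    """Token throughput: tokens produced per second."""
    return r.tokens_generated / r.wall_time_s

def energy_per_token_j(r: RunResult) -> float:
    """Energy cost per token: joules = watts x seconds, divided by tokens."""
    return r.avg_power_w * r.wall_time_s / r.tokens_generated

def throughput_per_litre(r: RunResult) -> float:
    """Size-normalised throughput, for space-constrained deployments."""
    return throughput_tok_s(r) / (r.device_volume_cm3 / 1000.0)

# Illustrative numbers only: 512 tokens in 64 s at 10 W on a 0.5 L device.
run = RunResult(tokens_generated=512, wall_time_s=64.0,
                avg_power_w=10.0, device_volume_cm3=500.0)
print(throughput_tok_s(run))     # 8.0 tok/s
print(energy_per_token_j(run))   # 1.25 J/token
print(throughput_per_litre(run)) # 16.0 tok/s per litre
```

Ranking configurations on each of these axes separately, rather than a single composite score, is one simple way to make the kind of multi-dimensional comparison the authors argue for.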

Abstract

Large language models (LLMs) are becoming increasingly capable at small parameter scales. At the same time, conventional cloud-centric deployment introduces challenges around data privacy, latency, and cost that are acute in operational technology and defence environments. Advances in model distillation, quantisation, and affordable edge accelerators now make local LLM inference on single-board computers feasible, but the high dimensionality of the configuration space makes identifying optimal deployments difficult without structured evaluation. Existing LLM-specific edge benchmarking efforts suffer from CPU-only inference, poor coverage of genuine single-board computers, and generic evaluation tasks that lack multi-dimensional assessment of hardware effectiveness. This paper proposes a multi-dimensional benchmarking methodology that jointly evaluates inference performance and hardware efficiency across four IoT-suitable edge platform configurations, testing single-board computers paired with the latest available hardware accelerators. Our results reveal the benefits of hardware accelerators such as NPUs and GPUs, with multi-dimensional evaluations quantifying the trade-offs between power efficiency, physical device size, and token throughput, offering practical guidance for deploying generative AI in privacy-sensitive and connectivity-limited environments such as unmanned vehicles and portable, ruggedised operations.