Benchmarking Deep Learning Models for Object Detection on Edge Computing Devices

arXiv cs.LG / 5/1/2026

💬 Opinion · Developer Stack & Infrastructure · Models & Research

Key Points

  • The paper benchmarks several state-of-the-art object detection models (YOLOv8 variants, EfficientDet Lite variants, and SSD variants) to assess how they perform on resource-constrained edge hardware.
  • Models with lower accuracy (e.g., SSD MobileNet V1) tend to be more energy-efficient and faster at inference, while higher-accuracy models (e.g., YOLOv8 Medium) generally consume more power and run slower.
  • Hardware accelerators can invert these trade-offs: with a TPU attached, some higher-accuracy models run faster or more efficiently than their mAP would suggest.
  • Among the tested devices, Jetson Orin Nano delivers the fastest and most energy-efficient request handling, even though it has the highest idle energy consumption.
  • The study provides practical guidance on balancing accuracy (mAP), latency, and energy use when selecting both detection models and edge devices for real-time applications.
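The latency numbers behind comparisons like these typically come from a timed inference loop with a warm-up phase. The sketch below is a minimal, hypothetical harness (not the paper's actual benchmarking code); `fake_detector` stands in for a real model call such as a TFLite interpreter's `invoke()`.

```python
import time
import statistics

def benchmark(infer, n_warmup=5, n_runs=50):
    """Measure per-inference latency (ms) for a callable `infer`.

    Warm-up runs are discarded so caches, JIT, and accelerator
    initialization do not skew the measured distribution.
    """
    for _ in range(n_warmup):
        infer()
    latencies = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        infer()
        latencies.append((time.perf_counter() - t0) * 1000.0)  # seconds -> ms
    return {
        "mean_ms": statistics.mean(latencies),
        "p95_ms": sorted(latencies)[int(0.95 * (len(latencies) - 1))],
    }

# Stand-in workload; replace with a real detector's inference call.
def fake_detector():
    sum(i * i for i in range(10_000))

stats = benchmark(fake_detector)
print(f"mean {stats['mean_ms']:.2f} ms, p95 {stats['p95_ms']:.2f} ms")
```

Reporting a tail percentile alongside the mean matters for real-time applications, where occasional slow frames can break a latency budget even when the average looks acceptable.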

Abstract

Modern applications, such as autonomous vehicles, require deploying deep learning algorithms on resource-constrained edge devices for real-time image and video processing. However, there is limited understanding of the efficiency and performance of various object detection models on these devices. In this paper, we evaluate state-of-the-art object detection models, including YOLOv8 (Nano, Small, Medium), EfficientDet Lite (Lite0, Lite1, Lite2), and SSD (SSD MobileNet V1, SSDLite MobileDet). We deployed these models on popular edge devices like the Raspberry Pi 3, 4, and 5 with/without TPU accelerators, and Jetson Orin Nano, collecting key performance metrics such as energy consumption, inference time, and Mean Average Precision (mAP). Our findings highlight that lower mAP models such as SSD MobileNet V1 are more energy-efficient and faster in inference, whereas higher mAP models like YOLOv8 Medium generally consume more energy and have slower inference, though with exceptions when accelerators like TPUs are used. Among the edge devices, Jetson Orin Nano stands out as the fastest and most energy-efficient option for request handling, despite having the highest idle energy consumption. These results emphasize the need to balance accuracy, speed, and energy efficiency when deploying deep learning models on edge devices, offering valuable guidance for practitioners and researchers selecting models and devices for their applications.
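One way to make the accuracy/energy trade-off concrete is to compute marginal energy per request: the power drawn above idle, multiplied by the time spent on the inference. This explains how a device like the Jetson Orin Nano can have the highest idle draw yet the lowest energy per request, if its inferences finish fast enough. The numbers below are illustrative assumptions, not the paper's measurements.

```python
def energy_per_inference_mj(active_power_w, idle_power_w, latency_ms):
    """Marginal energy per request in millijoules.

    (active - idle) watts * latency in ms gives mJ directly,
    since 1 W * 1 ms = 1 mJ.
    """
    return (active_power_w - idle_power_w) * latency_ms

# Hypothetical device: 6.0 W under load, 3.5 W idle, 40 ms per inference.
print(energy_per_inference_mj(6.0, 3.5, 40.0))  # -> 100.0 mJ per request

# A faster device can win despite higher idle draw:
# 10.0 W under load, 7.0 W idle, but only 10 ms per inference.
print(energy_per_inference_mj(10.0, 7.0, 10.0))  # -> 30.0 mJ per request
```

Under this metric, throughput and energy efficiency are coupled: halving latency at constant power halves the energy cost of each detection, which is why the fastest device in the study can also be the most energy-efficient per request.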