Benchmarking Deep Learning Models for Object Detection on Edge Computing Devices
arXiv cs.LG / 5/1/2026
Key Points
- The paper benchmarks several state-of-the-art object detection models (YOLOv8 variants, EfficientDet Lite variants, and SSD variants) to assess how they perform on resource-constrained edge hardware.
- Models with lower accuracy (e.g., SSD MobileNet V1) tend to be more energy-efficient and faster at inference, while higher-accuracy models (e.g., YOLOv8 Medium) generally consume more power and run slower.
- Hardware accelerators can shift these trade-offs: the authors observe exceptions to the usual accuracy-versus-efficiency pattern when the deployed models run on TPUs.
- Among the tested devices, Jetson Orin Nano delivers the fastest and most energy-efficient request handling, even though it has the highest idle energy consumption.
- The study provides practical guidance on balancing accuracy (mAP), latency, and energy use when selecting both detection models and edge devices for real-time applications.
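The model-selection guidance above can be sketched as a simple constrained search: given a latency budget and a power budget, pick the highest-mAP model that satisfies both. The benchmark numbers below are illustrative placeholders, not the paper's measurements, and `pick_model` is a hypothetical helper, not code from the study.

```python
# Hypothetical benchmark rows: (model, mAP, latency_ms, power_w).
# Values are illustrative placeholders, NOT the paper's measurements.
BENCHMARKS = [
    ("SSD MobileNet V1", 0.23, 12.0, 2.1),
    ("EfficientDet Lite0", 0.26, 25.0, 2.8),
    ("YOLOv8 Nano", 0.37, 18.0, 3.0),
    ("YOLOv8 Medium", 0.50, 55.0, 6.5),
]

def pick_model(max_latency_ms, max_power_w):
    """Return the highest-mAP row that fits both the latency and power budgets,
    or None if no model qualifies."""
    feasible = [row for row in BENCHMARKS
                if row[2] <= max_latency_ms and row[3] <= max_power_w]
    if not feasible:
        return None
    return max(feasible, key=lambda row: row[1])

# With a 30 ms / 4 W budget, the mid-accuracy model wins over the
# cheapest-but-least-accurate one; a tighter budget may rule everything out.
print(pick_model(30.0, 4.0))
print(pick_model(5.0, 1.0))
```

The same pattern extends naturally to per-device tables (e.g. one set of rows per board), which mirrors how the paper frames the joint model-and-device choice.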