YOLO Object Detectors for Robotics -- a Comparative Study

arXiv cs.CV / March 31, 2026


Key Points

  • The study evaluates multiple YOLO versions and variants to determine their suitability for detecting objects within a robot workspace, using both a custom robotics dataset and COCO2017.
  • It tests detector robustness by applying image distortions to both datasets, probing how performance degrades under challenging visual conditions.
  • Experiments vary training/testing configurations and compare YOLO model variants to indicate which version is most appropriate for robotic vision use cases.
  • The paper concludes that the reported results can help practitioners select a specific YOLO model for robotics tasks based on empirical accuracy and robustness findings.
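The summary above does not specify which distortions the authors applied, so the following is only a minimal NumPy sketch of the general idea: corrupt clean test images (here with additive Gaussian noise and a box blur, both hypothetical choices) and then re-run each YOLO variant on the corrupted copies to compare mAP against the clean baseline.

```python
import numpy as np

def add_gaussian_noise(img, sigma=10.0, rng=None):
    """Additive Gaussian noise, clipped back to the valid uint8 range."""
    rng = np.random.default_rng(0) if rng is None else rng
    noisy = img.astype(np.float32) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def box_blur(img, k=5):
    """Simple k x k box blur applied per channel (edge-padded)."""
    pad = k // 2
    padded = np.pad(img.astype(np.float32),
                    ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros(img.shape, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1], :]
    return (out / (k * k)).astype(np.uint8)

# Build a distorted copy of a test image; in the actual study each YOLO
# variant would then be evaluated on both the clean and distorted sets.
clean = np.full((64, 64, 3), 128, dtype=np.uint8)
distorted = [add_gaussian_noise(clean, sigma=25.0), box_blur(clean, k=7)]
```

The distortion functions are deliberately self-contained; in practice libraries such as OpenCV or albumentations offer richer, faster corruption pipelines.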

Abstract

YOLO object detectors have recently become a key component of vision systems in many domains. The YOLO family comprises multiple versions, each available in several variants. The research reported in this paper aims to validate the applicability of members of this family to detecting objects located within a robot workspace. In our experiments, we used our custom dataset and the COCO2017 dataset. To test the robustness of the investigated detectors, the images in these datasets were subjected to distortions. The results of our experiments, covering variations of training/testing configurations and models, may support the choice of an appropriate YOLO version for robotic vision tasks.
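Comparisons between detector versions on COCO2017 are typically reported as mAP, which rests on intersection-over-union (IoU) matching between predicted and ground-truth boxes (COCO-style mAP averages precision over IoU thresholds from 0.5 to 0.95). The helper below is a generic sketch of that building block, not code from the paper; boxes are assumed to be `(x1, y1, x2, y2)` corner tuples.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    # Union = sum of areas minus the intersection counted twice.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A prediction is usually counted as a true positive when its IoU with an unmatched ground-truth box of the same class exceeds the threshold; libraries such as pycocotools implement the full mAP computation on top of this.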