A Comparative Study of Modern Object Detectors for Robust Apple Detection in Orchard Imagery
arXiv cs.CV · April 14, 2026
Key Points
- The paper addresses robust single-class apple detection in orchard imagery by accounting for challenging conditions such as illumination changes, leaf clutter, dense fruit clusters, and partial occlusion.
- It introduces a controlled and reproducible benchmark on the public AppleBBCH81 dataset with one fixed train/validation/test split and a unified evaluation protocol across six detectors (YOLOv10n, YOLO11n, RT-DETR-L, Faster R-CNN, FCOS, and SSDLite320).
- Using COCO-style metrics (mAP@0.5 and mAP@0.5:0.95), YOLO11n achieves the best strict-localization performance on the validation split (mAP@0.5:0.95 = 0.6065; mAP@0.5 = 0.9620).
- The study also shows that threshold-dependent behavior matters for deployment: at a low-confidence operating point (confidence >= 0.05), YOLOv10n achieves the highest F1-score, while RT-DETR-L shows high recall but many false positives (low precision).
- Overall, the results recommend selecting detectors based not only on localization accuracy but also on threshold robustness to match downstream requirements like counting, yield prediction, or robotic harvesting.
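The threshold-dependent behavior described above can be sketched in a few lines: fix a confidence operating point, match surviving predictions to ground-truth boxes at IoU ≥ 0.5 (the standard COCO-style criterion), and compute F1. This is a minimal illustration, not the paper's evaluation code; the greedy matching strategy and all box coordinates below are assumptions for the example.

```python
# Minimal sketch of F1 at a fixed confidence operating point, in the
# spirit of the paper's threshold analysis. Matching is greedy and
# one-to-one at IoU >= 0.5 (COCO-style criterion); the data used in
# the usage example is illustrative, not from AppleBBCH81.

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def f1_at_threshold(preds, gts, conf_thresh=0.05, iou_thresh=0.5):
    """F1 for predictions [(box, score), ...] vs. ground-truth boxes.

    Predictions below conf_thresh are discarded; the rest are matched
    greedily (highest score first) to at most one unmatched GT box.
    """
    kept = sorted((p for p in preds if p[1] >= conf_thresh),
                  key=lambda p: -p[1])
    matched, tp = set(), 0
    for box, _score in kept:
        best, best_iou = None, iou_thresh
        for i, g in enumerate(gts):
            if i not in matched and iou(box, g) >= best_iou:
                best, best_iou = i, iou(box, g)
        if best is not None:
            matched.add(best)
            tp += 1
    fp, fn = len(kept) - tp, len(gts) - tp
    precision = tp / len(kept) if kept else 0.0
    recall = tp / len(gts) if gts else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```

Raising `conf_thresh` trades recall for precision; the paper's observation that RT-DETR-L pairs high recall with low precision at the 0.05 operating point corresponds to many low-scoring predictions surviving the threshold without matching any ground-truth box.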