Dental Panoramic Radiograph Analysis Using YOLO26: From Tooth Detection to Disease Diagnosis

arXiv cs.CV / 4/20/2026


Key Points

  • The study introduces an automated dental imaging pipeline using YOLOv26 for tooth detection, FDI-based tooth numbering, and disease segmentation from panoramic radiographs.
  • The DENTEX dataset was preprocessed with Roboflow (format conversion and augmentation) and used to train YOLOv26-seg variants via transfer learning on Google Colab at 800×800 resolution.
  • For tooth enumeration, the YOLOv26m-seg model achieved strong results (precision 0.976, recall 0.970, box mAP50 0.976) and improved over a YOLOv8x baseline (up to +4.9% precision and +3.3% mAP50).
  • For disease segmentation, the best model (YOLOv26l-seg) delivered moderate performance (box mAP50 0.591, mask mAP50 0.547) across four pathology classes.
  • The analysis suggests that visually distinctive impacted teeth are detected more accurately than others, and the proposed YOLOv26 framework could improve diagnostic efficiency and consistency in clinical dentistry.
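The training setup described above (transfer learning with segmentation variants at 800×800) could be sketched with a configuration helper like the one below. The weight filename `yolo26m-seg.pt`, the dataset YAML path, and the epoch count are assumptions for illustration; the paper summary does not state them.

```python
# Sketch of the training configuration described in the key points.
# Assumptions (not confirmed by the source): Ultralytics-style segmentation
# weights named like "yolo26m-seg.pt", a standard Roboflow data.yaml export,
# and an illustrative epoch count.

def make_train_config(variant: str = "yolo26m-seg.pt") -> dict:
    """Keyword arguments for a transfer-learning run at the
    800x800 resolution reported in the study."""
    return {
        "weights": variant,          # pretrained checkpoint to fine-tune
        "data": "dentex/data.yaml",  # hypothetical Roboflow export path
        "imgsz": 800,                # resolution reported in the paper
        "epochs": 100,               # illustrative; not stated in the summary
        "pretrained": True,          # transfer learning, not from scratch
    }

# An actual run with the Ultralytics API would then look like (not executed here):
# from ultralytics import YOLO
# model = YOLO(make_train_config()["weights"])
# results = model.train(**make_train_config())
```

Swapping the `variant` argument (e.g. to a hypothetical `yolo26l-seg.pt`) would reproduce the model-size sweep across the five -seg variants the study compares.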

Abstract

Panoramic radiography is a fundamental diagnostic tool in dentistry, offering a comprehensive view of the entire dentition with minimal radiation exposure. However, manual interpretation is time-consuming and prone to errors, especially in high-volume clinical settings. This creates a pressing need for efficient automated solutions. This study presents the first application of YOLOv26 for automated tooth detection, FDI-based numbering, and dental disease segmentation in panoramic radiographs. The DENTEX dataset was preprocessed using Roboflow for format conversion and augmentation, yielding 1,082 images for tooth enumeration and 1,040 images for disease segmentation across four pathology classes. Five YOLOv26-seg variants were trained on Google Colab using transfer learning at a resolution of 800×800. Results demonstrate that the YOLOv26m-seg model achieved the best performance for tooth enumeration, with a precision of 0.976, recall of 0.970, and box mAP50 of 0.976. It outperformed the YOLOv8x baseline by 4.9% in precision and 3.3% in mAP50, while also enabling high-quality mask-level segmentation (mask mAP50 = 0.970). For disease segmentation, the YOLOv26l-seg model attained a box mAP50 of 0.591 and a mask mAP50 of 0.547. Impacted teeth showed the highest per-class average precision (0.943), indicating that visual distinctiveness influences detection performance more than annotation quantity. Overall, these findings demonstrate that YOLOv26-based models offer a robust and accurate framework for automated dental image analysis, with strong potential to enhance diagnostic efficiency and consistency in clinical practice.
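As a reminder of what the reported metrics mean, precision and recall are computed from true/false positive and false negative counts, with a detection counting as a true positive when it overlaps a ground-truth box at IoU ≥ 0.5 (the "50" in mAP50). The counts below are invented purely to land near the paper's reported tooth-enumeration values:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision = TP / (TP + FP); recall = TP / (TP + FN).
    Detections are matched to ground truth at IoU >= 0.5 for mAP50-style
    evaluation; the counts here are illustrative, not from the paper."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Invented counts chosen only to approximate the reported 0.976 / 0.970:
p, r = precision_recall(tp=970, fp=24, fn=30)
print(round(p, 3), round(r, 3))  # → 0.976 0.97
```

mAP50 then averages, over classes, the area under each class's precision-recall curve at that same 0.5 IoU threshold, which is why a visually distinctive class such as impacted teeth can score high (0.943) even without the largest annotation count.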