MetaDent: Labeling Clinical Images for Vision-Language Models in Dentistry

arXiv cs.CV / 4/17/2026


Key Points

  • MetaDent addresses the lack of fine-grained, annotated intraoral datasets and benchmarks for vision-language models (VLMs) in dentistry by introducing a large multi-source clinical image dataset plus an annotation framework.
  • The resource uses an LLM-assisted “meta-labeling” approach that combines high-level image summaries with point-by-point free-text descriptions of abnormalities, producing scalable, task-agnostic representations (sketched in code after this list).
  • From 60,669 curated dental images, the team fully annotates 2,588 images using the proposed hierarchical scheme and generates standardized benchmarks, including roughly 15K visual question answering (VQA) pairs and an 18-class multi-label classification set.
  • Human review and error analysis validate that the LLM-driven conversion of meta-labels into benchmark items preserves fidelity and semantic accuracy, enabling reliable benchmark construction.
  • Evaluations across VQA, classification, and image captioning show that current state-of-the-art VLMs still struggle with fine-grained understanding of intraoral scenes and often produce inconsistent or incomplete captions. The dataset, annotations, and tools are publicly released to support reproducible research.
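
To make the meta-labeling scheme concrete, here is a minimal sketch of what one annotation record might look like as a Python structure. The field names and example content are illustrative assumptions, not MetaDent's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MetaLabel:
    """One MetaDent-style annotation: a high-level scene summary plus
    point-by-point free-text findings. Field names are illustrative."""
    image_id: str
    summary: str                                       # global description of the intraoral scene
    findings: List[str] = field(default_factory=list)  # one free-text note per abnormality

# Hypothetical example record:
label = MetaLabel(
    image_id="img_000123",
    summary="Frontal intraoral view of permanent dentition with visible plaque.",
    findings=[
        "Generalized marginal gingival inflammation in the anterior region.",
        "Carious lesion on the buccal surface of the upper left first molar.",
    ],
)
```

Because the findings are free text rather than a fixed ontology, the same record can later be projected into different task formats (VQA, multi-label classification, captioning), which is what makes the representation task-agnostic.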

Abstract

Vision-Language Models (VLMs) have demonstrated significant potential in medical image analysis, yet their application to intraoral photography remains largely underexplored due to the lack of fine-grained, annotated datasets and comprehensive benchmarks. To address this, we present MetaDent, a comprehensive resource that includes (1) a novel, large-scale dental image dataset collected from clinical, public, and web sources; (2) a semi-structured annotation framework designed to capture the hierarchical and clinically nuanced nature of dental photography; and (3) benchmark suites for evaluating state-of-the-art VLMs on clinical image understanding. Our labeling approach combines a high-level image summary with point-by-point, free-text descriptions of abnormalities, enabling rich, scalable, and task-agnostic representations. We curated 60,669 dental images from diverse sources and annotated a representative subset of 2,588 images using this meta-labeling scheme. Leveraging Large Language Models (LLMs), we derive standardized benchmarks: approximately 15K Visual Question Answering (VQA) pairs and an 18-class multi-label classification dataset, validated with human review and error analysis to confirm that the LLM-driven conversion reliably preserves fidelity and semantic accuracy. We then evaluate state-of-the-art VLMs across VQA, classification, and image captioning tasks. Quantitative results reveal that even the most advanced models struggle with fine-grained understanding of intraoral scenes, achieving only moderate accuracy and producing inconsistent or incomplete captions. We publicly release our dataset, annotations, and tools to foster reproducible research and accelerate the development of vision-language systems for dental applications.
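
As a rough illustration of the LLM-driven benchmark derivation described above, the sketch below turns one meta-label (summary plus findings) into VQA pairs. The `call_llm` helper, the prompt, and the JSON output format are all hypothetical; the paper's actual pipeline, which also includes human review and error analysis, may differ.

```python
import json
from typing import Callable, Dict, List

def derive_vqa_pairs(
    summary: str,
    findings: List[str],
    call_llm: Callable[[str], str],  # hypothetical text-completion function
) -> List[Dict[str, str]]:
    """Convert one meta-label into question-answer pairs via an LLM."""
    prompt = (
        "You are given a dental image annotation.\n"
        f"Summary: {summary}\n"
        f"Findings: {json.dumps(findings)}\n"
        "Write question-answer pairs grounded only in this annotation, "
        'formatted as a JSON list of {"question": ..., "answer": ...} objects.'
    )
    try:
        pairs = json.loads(call_llm(prompt))
    except json.JSONDecodeError:
        return []  # malformed generations would be routed to human review
    # Keep only well-formed pairs; the paper additionally validates the
    # derived items with human review and error analysis.
    return [
        p for p in pairs
        if isinstance(p, dict) and p.get("question") and p.get("answer")
    ]
```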
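
For the 18-class multi-label benchmark, evaluation plausibly reduces to standard multi-label metrics over binary indicator matrices. The scikit-learn sketch below computes micro- and macro-averaged F1; the metric choice and the random toy data are assumptions, since the summary does not specify how accuracy is scored.

```python
import numpy as np
from sklearn.metrics import f1_score

N_CLASSES = 18  # the MetaDent classification benchmark covers 18 classes

# Toy stand-ins for model output: each row is one image, each column one
# class, with 1 meaning the condition is (predicted) present.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(100, N_CLASSES))
y_pred = rng.integers(0, 2, size=(100, N_CLASSES))

# Micro-F1 pools decisions across all classes; macro-F1 averages per-class
# F1, so rare conditions count as much as common ones.
print("micro-F1:", f1_score(y_true, y_pred, average="micro"))
print("macro-F1:", f1_score(y_true, y_pred, average="macro", zero_division=0))
```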