DOI: 10.1016/j.jds.2025.01.003

Pages: 1110–1117

Abstract

Background/purpose: Preventive dentistry is essential for maintaining public oral health, but inequalities in dental care, especially in underserved areas, remain a significant challenge. Image-based dental analysis using intraoral photographs offers a practical and scalable approach to bridging this gap. In this context, we developed SegmentAnyTooth, an open-source deep learning framework that solves the critical first step by enabling automated tooth enumeration and segmentation across five standard intraoral views: upper occlusal, lower occlusal, frontal, right lateral, and left lateral. This tool lays the groundwork for advanced applications, reducing reliance on limited professional resources and enhancing access to preventive dental care.

Materials and methods: A dataset of 5000 intraoral photographs from 1000 sets (953 subjects) was annotated with tooth surfaces and FDI notations. You Only Look Once 11 (YOLO11) nano models were trained for tooth localization and enumeration, followed by Light Segment Anything in High Quality (Light HQ-SAM) for segmentation using an active learning approach.

Results: SegmentAnyTooth demonstrated high segmentation accuracy, with mean Dice similarity coefficients (DSC) of 0.983 ± 0.036 for upper occlusal, 0.973 ± 0.060 for lower occlusal, and 0.920 ± 0.063 for frontal views. Lateral-view models also performed well, with mean DSCs of 0.939 ± 0.070 (right) and 0.945 ± 0.056 (left). Statistically significant improvements over baseline models such as U-Net, nnU-Net, and Mask R-CNN were observed (Wilcoxon signed-rank test, P < 0.01).

Conclusion: SegmentAnyTooth provides accurate, multi-view tooth segmentation to enhance dental care, early diagnosis, individualized care, and population-level research. Its open-source design supports integration into clinical and public health workflows, with ongoing improvements focused on generalizability.
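The Dice similarity coefficient reported above is a standard overlap metric for segmentation masks, defined as 2|A∩B| / (|A| + |B|). As a minimal illustrative sketch (not the authors' evaluation code), it can be computed for flat binary masks like this:

```python
def dice_coefficient(pred, target):
    """Dice similarity coefficient between two flat binary masks:
    DSC = 2 * |intersection| / (|pred| + |target|).
    Returns 1.0 for two empty masks by convention."""
    inter = sum(1 for p, t in zip(pred, target) if p and t)  # |A ∩ B|
    denom = sum(pred) + sum(target)                          # |A| + |B|
    return 2.0 * inter / denom if denom else 1.0

# Identical masks yield a perfect score of 1.0;
# partially overlapping masks score between 0 and 1.
print(dice_coefficient([1, 1, 0, 1], [1, 1, 0, 1]))  # → 1.0
print(dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0]))  # → 0.666...
```

In the paper, per-image DSC scores of SegmentAnyTooth and each baseline (U-Net, nnU-Net, Mask R-CNN) were compared as paired samples with the Wilcoxon signed-rank test.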
