Comparison of Three Commercially Available, AI-Driven Cephalometric Analysis Tools in Orthodontics


Kazimierczak W., Gawin G., Janiszewska-Olszowska J., Dyszkiewicz-Konwinska M., Nowicki P., Kazimierczak N., et al.

JOURNAL OF CLINICAL MEDICINE, vol. 13, no. 13, 2024 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 13, Issue: 13
  • Publication Date: 2024
  • DOI: 10.3390/jcm13133733
  • Journal Name: JOURNAL OF CLINICAL MEDICINE
  • Indexed In: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, Directory of Open Access Journals
  • Affiliated with Ankara University: Yes

Abstract

Background: Cephalometric analysis (CA) is an indispensable diagnostic tool in orthodontics for treatment planning and outcome assessment. Manual CA is time-consuming and prone to variability. Methods: This study aimed to compare the accuracy and repeatability of CA results among three commercial AI-driven programs: CephX, WebCeph, and AudaxCeph. It involved a retrospective analysis of lateral cephalograms from a single orthodontic center. Automated CA was performed with each AI program, focusing on common parameters defined by Downs, Ricketts, and Steiner. Repeatability was tested by reanalyzing 50 randomly selected cases with each program. Statistical analyses included intraclass correlation coefficients (ICC3) for agreement and the Friedman test for concordance. Results: One hundred twenty-four cephalograms were analyzed. High agreement between the AI systems was noted for most parameters (ICC3 > 0.9). Notable differences were found in the measurements of the angle of convexity and the occlusal plane, where discrepancies suggested different methodologies among the programs. Some analyses showed high variability in the results, indicating errors. Repeatability analysis revealed perfect agreement within each program. Conclusions: AI-driven cephalometric analysis tools demonstrate high potential for reliable and efficient orthodontic assessments, with substantial agreement in repeated analyses. Nevertheless, the observed discrepancies and the high variability in some analyses underscore the need for standardization across AI platforms and for critical evaluation of automated results by clinicians, particularly for parameters with significant treatment implications.
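The abstract's agreement statistics can be illustrated with a short sketch. The following is a minimal Python example, not the study's actual pipeline: it implements the standard two-way mixed, single-rater consistency coefficient ICC(3,1) and runs SciPy's Friedman test on synthetic data. The "SNA-like" angles and the per-program offsets below are invented purely for illustration and are not measurements from the paper.

```python
# Hedged sketch: ICC(3,1) agreement and Friedman concordance for one
# cephalometric parameter measured by k = 3 programs on n cases.
# All data here are synthetic; nothing is taken from the study.
import numpy as np
from scipy.stats import friedmanchisquare


def icc3(ratings: np.ndarray) -> float:
    """Two-way mixed, single-rater consistency ICC(3,1).

    ratings: array of shape (n_subjects, k_raters).
    Formula: (MS_rows - MS_err) / (MS_rows + (k - 1) * MS_err).
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-case means
    col_means = ratings.mean(axis=0)   # per-program means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((ratings - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)


# Synthetic angles (degrees) for 50 cases from three hypothetical programs,
# each adding a small systematic offset plus random landmarking noise.
rng = np.random.default_rng(0)
true_vals = rng.normal(82.0, 3.0, size=50)
prog_a = true_vals + rng.normal(0.0, 0.3, size=50)
prog_b = true_vals + 0.2 + rng.normal(0.0, 0.3, size=50)
prog_c = true_vals - 0.1 + rng.normal(0.0, 0.3, size=50)
X = np.column_stack([prog_a, prog_b, prog_c])

print(f"ICC(3,1) = {icc3(X):.3f}")  # small noise vs. large case spread -> high ICC
stat, p = friedmanchisquare(prog_a, prog_b, prog_c)
print(f"Friedman chi2 = {stat:.2f}, p = {p:.4f}")
```

With between-case variability much larger than the inter-program noise, the ICC lands well above the 0.9 threshold the abstract uses as its agreement criterion, while the Friedman test probes whether the programs' systematic offsets differ.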