Accuracy, quality, and readability analyses of responses from large language models to questions on pediatric dental sedation


Kocaoğlu M. H., Demirel A., Kaya İ.

BMC ORAL HEALTH, vol. 2025, 2026 (SCI-Expanded, Scopus)

  • Publication Type: Article / Full Article
  • Volume: 2025
  • Publication Date: 2026
  • DOI: 10.1186/s12903-026-08026-x
  • Journal Name: BMC ORAL HEALTH
  • Indexed In: Scopus, Science Citation Index Expanded (SCI-EXPANDED), CINAHL, MEDLINE, Directory of Open Access Journals
  • Ankara University Affiliated: Yes

Abstract

Background

Large language models (LLMs) have become increasingly integrated into healthcare communication, including dentistry. However, the extent to which they can provide accurate, guideline-based information on critical topics such as pediatric dental sedation remains unclear. The aim of this study was to evaluate and compare the accuracy, content quality, and readability of responses provided by five widely used LLM-based chatbots (ChatGPT-4o, ChatGPT-3.5, Google Gemini, Microsoft Copilot, and Anthropic Claude) to clinical questions related to pediatric dental sedation.

Methods

A total of 32 clinically relevant questions covering preoperative, intraoperative, and postoperative aspects of pediatric dental sedation were presented to each chatbot. Responses were assessed independently by two blinded experts, using an evidence-based grading system for accuracy and the DISCERN tool for content quality. Readability was calculated using the Flesch-Kincaid Grade Level formula. Data were analysed using Kruskal–Wallis tests to evaluate overall differences among the chatbots, followed by pairwise comparisons adjusted for multiple testing. Inter-reviewer reliability for DISCERN scores was assessed using intraclass correlation coefficients, and descriptive statistics were calculated for each metric. The statistical significance level was set at 0.05.
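
For illustration, the quantitative pipeline described above can be sketched in Python. The abstract does not report implementation details, so the column names (chatbot, rater, response_id, discern), the vowel-group syllable counter, and the Bonferroni-adjusted Mann-Whitney pairwise tests are assumptions of this sketch, not the authors' actual procedure.

    import re
    from itertools import combinations

    import pandas as pd
    from scipy.stats import kruskal, mannwhitneyu
    import pingouin as pg  # for intraclass correlation coefficients


    def count_syllables(word):
        # Crude vowel-group heuristic (assumption); published readability
        # tools typically use dictionary-based syllable counts.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


    def fkgl(text):
        # Flesch-Kincaid Grade Level:
        # 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59


    def compare_chatbots(df, metric="discern"):
        # df: one row per (response, rater) with hypothetical columns
        # "chatbot", "rater", "response_id", and the score column `metric`.
        groups = [g[metric].values for _, g in df.groupby("chatbot")]
        h_stat, p_overall = kruskal(*groups)  # overall group difference
        print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_overall:.4f}")

        pairs = list(combinations(sorted(df["chatbot"].unique()), 2))
        alpha = 0.05 / len(pairs)  # Bonferroni-adjusted threshold (assumption)
        for a, b in pairs:
            _, p_pair = mannwhitneyu(df.loc[df["chatbot"] == a, metric],
                                     df.loc[df["chatbot"] == b, metric])
            verdict = "significant" if p_pair < alpha else "n.s."
            print(f"{a} vs {b}: p = {p_pair:.4f} ({verdict} at adjusted alpha)")

        # Inter-rater reliability of the two reviewers' scores, per chatbot
        for name, g in df.groupby("chatbot"):
            icc = pg.intraclass_corr(data=g, targets="response_id",
                                     raters="rater", ratings=metric)
            print(name, icc.loc[icc["Type"] == "ICC2", "ICC"].values)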

Results

Gemini and ChatGPT-4o achieved the highest accuracy, providing the majority of their responses in full compliance with the guidelines. ChatGPT-3.5 and Claude performed moderately, while Copilot showed the lowest accuracy and the highest rate of guideline deviation. For content quality, ChatGPT-4o recorded the highest mean DISCERN score (57.77), closely followed by Gemini (57.56), although the difference between them was not statistically significant (p > 0.05). Readability analysis revealed that ChatGPT-3.5 produced the most accessible content, while Claude's responses were the most complex. Inter-rater reliability for DISCERN scoring was excellent (> 0.85) for all chatbots, supporting the robustness of the evaluations.

Conclusions

Although ChatGPT-4o and Gemini performed best overall, none of the evaluated chatbots fully aligned with clinical guidelines or consistently achieved high accuracy across all phases. These findings underscore the need for expert oversight when AI chatbots are used to provide information on pediatric dental sedation. Future research should concentrate on multilingual testing, iterative dialogue-based evaluations, and domain-specific fine-tuning to enhance clinical applicability and patient safety.