Foot and Ankle Surgery, vol. 32, no. 3, pp. 290-295, 2026 (SCI-Expanded, Scopus)
Background: This study investigates the quality, accuracy, and readability of ChatGPT's responses to common patient inquiries regarding hallux rigidus.

Methods: Twenty-five patient questions were directed to ChatGPT and analyzed. The DISCERN criteria were used to assess information quality, and the method by Mika et al. was used to evaluate response accuracy. Questions were classified according to the Rothwell classification, and readability was evaluated using the Flesch-Kincaid, Gunning Fog, Coleman-Liau, and SMOG indices.

Results: The mean DISCERN score was 50.26 (fair), and the mean Mika et al. score was 2.04 (satisfactory, requiring minimal clarification). According to the Rothwell classification, 72% of the questions fell into the Fact group. The mean readability corresponded to 11.3 years of education.

Conclusions: ChatGPT provides partially satisfactory information about hallux rigidus, though at a high reading level. More detailed content should include surgical classifications, biomechanical details, and levels of evidence. With these improvements, ChatGPT might be considered a supportive tool in patient education.

Level of evidence: None.
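The readability indices named in the Methods are simple formulas over word, sentence, and syllable counts. As an illustration only, the Flesch-Kincaid grade level (0.39 × words/sentence + 11.8 × syllables/word − 15.59) can be sketched as below; the syllable counter is a naive vowel-group heuristic, not the validated tooling a study like this would use.

```python
import re

def count_syllables(word):
    # Naive heuristic: one syllable per run of consecutive vowels,
    # with a floor of one syllable per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

A score near 11 (as reported in the Results) corresponds to roughly an 11th-grade reading level, well above the sixth-grade level commonly recommended for patient education materials.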