ChatGPT Can Offer At Least Satisfactory Responses to Common Patient Questions Regarding Hip Arthroscopy


Özbek E. A., Ertan M. B., Kından P., Karaca M. O., Gürsoy S., Chahla J.

Arthroscopy - Journal of Arthroscopic and Related Surgery, 2024 (SCI-Expanded)

Abstract

Purpose: To assess the accuracy of answers provided by ChatGPT 4.0 (an advanced language model developed by OpenAI) to 25 common patient questions about hip arthroscopy.

Methods: ChatGPT 4.0 was presented with 25 common patient questions regarding hip arthroscopy, with no follow-up questions or repetition. Each response was evaluated independently by 2 board-certified orthopaedic sports medicine surgeons. Responses were rated on a 4-point scale, with scores of 1, 2, 3, and 4 corresponding to “excellent response not requiring clarification,” “satisfactory requiring minimal clarification,” “satisfactory requiring moderate clarification,” and “unsatisfactory requiring substantial clarification,” respectively.

Results: Twenty responses were rated “excellent” and 2 responses were rated “satisfactory requiring minimal clarification” by both reviewers. The responses to the questions “What kind of anesthesia is used for hip arthroscopy?” and “What is the average age for hip arthroscopy?” were the ones rated “satisfactory requiring minimal clarification” by both reviewers. None of the responses was rated “satisfactory requiring moderate clarification” or “unsatisfactory” by either reviewer.

Conclusions: ChatGPT 4.0 provides at least satisfactory responses to patient questions regarding hip arthroscopy. Under the supervision of an orthopaedic sports medicine surgeon, it could be used as a supplementary tool for patient education.

Clinical Relevance: This study compared ChatGPT’s answers to common patient questions regarding hip arthroscopy with the current literature. As ChatGPT has gained popularity among patients, the study aimed to determine whether the responses patients receive from this chatbot are consistent with up-to-date literature.