Assessing ChatGPT Responses to Common Patient Questions Regarding Ankle Fractures
Blog Article
Category: Trauma

Introduction/Purpose: Before presenting for an orthopaedic clinical evaluation, patients have access to numerous resources on common orthopaedic injuries, their management, and related procedures. Recently, artificial intelligence (AI)-driven chatbots have provided a platform for patient engagement, encompassing topics ranging from basic skills modules to detailed literature reviews. ChatGPT (OpenAI), a recently developed AI-based chat model, has rapidly grown in popularity and garnered worldwide media attention. Communicating in everyday language, this technology allows patients to engage with an interface that supplies convincing, human-like responses.
Given the likelihood that patients may turn to this technology for orthopaedic and preoperative education, we sought to determine whether ChatGPT could effectively answer frequently asked questions regarding ankle fractures.

Methods: Frequently asked questions (FAQs) pertaining to ankle fractures were identified through an online search engine (Google). A compilation of commonly encountered questions regarding ankle fractures was generated, followed by a comprehensive review of all identified questions. A final set of twelve questions deemed pertinent and frequently encountered in clinical settings was determined by the authors (N.J. & B.G.).
These twelve questions regarding ankle fractures were posed to the chatbot in a single conversation thread on January 14th, 2024, without follow-up questions or repeat queries. Each response was analyzed for accuracy using an evidence-based approach. Three authors (P.J., K.P., & A.M.), board-certified in orthopaedic surgery, independently rated each response from ChatGPT in a blinded, sequential fashion. Ratings were designated as (1) "excellent response not requiring clarification," (2) "satisfactory requiring minimal clarification," (3) "satisfactory requiring moderate clarification," or (4) "unsatisfactory requiring substantial clarification."

Results: None of ChatGPT's responses received an "unsatisfactory" rating from the authors. Just under half (5/12) of the responses required "minimal clarification," 4 required no clarification, and 3 required "moderate clarification." Although several responses required nuanced clarification, the chatbot's answers were generally unbiased and evidence-based. More complex queries, including those regarding clinical decision-making, indications for preoperative workup and surgery, benefits and drawbacks, and complications, were more likely to be rated as "requiring minimal clarification" or "requiring moderate clarification." All raters would consider using ChatGPT to improve their patient education materials and expressed willingness to use it to help create future materials.
Conclusion: The ChatGPT AI chatbot may have the potential to provide evidence-based responses to questions commonly asked by patients regarding ankle fractures. ChatGPT may serve as a unique and valuable clinical tool for patient education, helping patients establish a basic understanding of ankle fractures before consultation with an orthopaedic surgeon.