Keywords: Abdominal infection; Antibiotic resistance; Antimicrobial stewardship; Artificial intelligence; Bacterial infections; Bloodstream infection; ChatGPT; Endocarditis; Infectious diseases; Pneumonia

Source: DOI: 10.1007/s15010-024-02350-6

Abstract:
OBJECTIVE: Advancements in Artificial Intelligence (AI) have made platforms like ChatGPT increasingly relevant in medicine. This study assesses ChatGPT's utility in addressing bacterial infection-related questions and antibiogram-based clinical cases.
METHODS: This study was a collaborative effort between infectious disease (ID) specialists and residents. A group of experts formulated six true/false questions, six open-ended questions, and six clinical cases with antibiograms for four types of infection (endocarditis, pneumonia, intra-abdominal infection, and bloodstream infection), for a total of 96 questions. The questions were submitted to four senior residents and four specialists in ID and entered into ChatGPT-4 and a trained version of ChatGPT-4. A total of 720 responses were obtained and reviewed by a blinded panel of experts in antibiotic treatment. The panel evaluated the responses for accuracy and completeness, the ability to identify the correct resistance mechanisms from the antibiograms, and the appropriateness of the antibiotic prescriptions.
RESULTS: No significant difference was noted among the four groups on the true/false questions, with approximately 70% correct answers. Both the trained ChatGPT-4 and ChatGPT-4 offered more accurate and complete answers to the open-ended questions than the residents and specialists. In the clinical cases, ChatGPT-4 showed lower accuracy in recognizing the correct resistance mechanism. ChatGPT-4 tended not to prescribe newer antibiotics such as cefiderocol or imipenem/cilastatin/relebactam, favoring less recommended options such as colistin. Both the trained ChatGPT-4 and ChatGPT-4 recommended longer-than-necessary treatment durations (p = 0.022).
CONCLUSIONS: This study highlights ChatGPT's capabilities and limitations in medical decision-making, specifically regarding bacterial infections and antibiogram analysis. While ChatGPT demonstrated proficiency in answering theoretical questions, it did not consistently align with expert decisions in clinical case management. Despite these limitations, ChatGPT's potential as a supportive tool in ID education and preliminary analysis is evident. However, it should not replace expert consultation, especially in complex clinical decision-making.