Keywords: AI chatbot, ChatGPT, accuracy, artificial intelligence, case study, clinical decision support, decision support, diagnosis, diagnostic, diagnostic excellence, language model, large language models, natural language processing, vignette

Source: DOI:10.2196/48808   PDF (PubMed)

Abstract:
BACKGROUND: The diagnostic accuracy of differential diagnoses generated by artificial intelligence chatbots, including ChatGPT models, for complex clinical vignettes derived from general internal medicine (GIM) department case reports is unknown.
OBJECTIVE: This study aims to evaluate the accuracy of the differential diagnosis lists generated by both third-generation ChatGPT (ChatGPT-3.5) and fourth-generation ChatGPT (ChatGPT-4) by using case vignettes from case reports published by the Department of GIM of Dokkyo Medical University Hospital, Japan.
METHODS: We searched PubMed for case reports. Upon identification, physicians selected diagnostic cases, determined the final diagnosis, and converted them into clinical vignettes. Physicians entered the text of each clinical vignette into ChatGPT-3.5 and ChatGPT-4 prompts to generate the top 10 differential diagnoses. The ChatGPT models were not specially trained or further reinforced for this task. Three GIM physicians from other medical institutions created differential diagnosis lists by reading the same clinical vignettes. We measured the rate of correct diagnosis within the top 10 differential diagnosis lists, within the top 5 differential diagnosis lists, and as the top diagnosis.
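The abstract does not report the exact prompt wording used, but the prompting step described above can be sketched as follows; the template text and the helper name `build_prompt` are illustrative assumptions, not taken from the study.

```python
# Hypothetical sketch of the prompting step described in Methods.
# The template wording and helper name are assumptions; the study's
# exact prompt is not reported in the abstract.

def build_prompt(vignette: str, k: int = 10) -> str:
    """Wrap a clinical vignette in a request for the top-k differential diagnoses."""
    return (
        f"List the top {k} differential diagnoses for the following case, "
        f"most likely first:\n\n{vignette}"
    )

example = build_prompt("A 58-year-old woman presented with fever, weight loss, ...")
print(example.splitlines()[0])
```

The same vignette text would be submitted unchanged to both ChatGPT-3.5 and ChatGPT-4, since the study compares the models on identical inputs.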
RESULTS: In total, 52 case reports were analyzed. The rates of correct diagnosis by ChatGPT-4 within the top 10 differential diagnosis lists, within the top 5 differential diagnosis lists, and as the top diagnosis were 83% (43/52), 81% (42/52), and 60% (31/52), respectively. The corresponding rates for ChatGPT-3.5 were 73% (38/52), 65% (34/52), and 42% (22/52). The rates of correct diagnosis by ChatGPT-4 were comparable to those by physicians within the top 10 lists (43/52, 83% vs 39/52, 75%; P=.47), within the top 5 lists (42/52, 81% vs 35/52, 67%; P=.18), and for the top diagnosis (31/52, 60% vs 26/52, 50%; P=.43); none of these differences was significant. The ChatGPT models' diagnostic accuracy did not vary significantly by open access status or publication date (before 2011 vs 2022).
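The reported percentages can be recomputed from the raw counts. A minimal sketch, using only the counts stated in the abstract (the dictionary layout and the helper name `topk_rate` are illustrative, not from the study):

```python
# Recompute the reported top-k correct-diagnosis rates from the raw counts.
# Counts are taken directly from the abstract's Results section.

def topk_rate(correct: int, total: int) -> int:
    """Return the correct-diagnosis rate as a whole-number percentage."""
    return round(100 * correct / total)

N = 52  # case reports analyzed

results = {
    "ChatGPT-4":   {"top10": 43, "top5": 42, "top1": 31},
    "ChatGPT-3.5": {"top10": 38, "top5": 34, "top1": 22},
    "Physicians":  {"top10": 39, "top5": 35, "top1": 26},
}

for reader, counts in results.items():
    rates = {k: topk_rate(v, N) for k, v in counts.items()}
    print(reader, rates)  # e.g. ChatGPT-4 -> {'top10': 83, 'top5': 81, 'top1': 60}
```

This reproduces the 83%/81%/60% figures for ChatGPT-4, 73%/65%/42% for ChatGPT-3.5, and 75%/67%/50% for the physician comparison group.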
CONCLUSIONS: This study demonstrates the potential diagnostic accuracy of differential diagnosis lists generated using ChatGPT-3.5 and ChatGPT-4 for complex clinical vignettes from case reports published by the GIM department. The rate of correct diagnoses within the top 10 and top 5 differential diagnosis lists generated by ChatGPT-4 exceeds 80%. Although derived from a limited data set of case reports from a single department, our findings highlight the potential utility of ChatGPT-4 as a supplementary tool for physicians, particularly for those affiliated with the GIM department. Further investigations should explore the diagnostic accuracy of ChatGPT by using distinct case materials beyond its training data. Such efforts will provide a comprehensive insight into the role of artificial intelligence in enhancing clinical decision-making.