Keywords: Artificial intelligence; ChatGPT; bladder cancers; kidney; patient information; prostate

Source: DOI: 10.1177/20552076241269538

Abstract:
Objective: To assess the quality of ChatGPT's cancer treatment recommendations (RECs) and their alignment with National Comprehensive Cancer Network (NCCN) guidelines and expert opinion.
Methods: Three urologists performed quantitative and qualitative assessments in October 2023, analyzing responses from ChatGPT-4 and ChatGPT-3.5 to 108 prostate, kidney, and bladder cancer prompts built from two zero-shot prompt templates. Performance was evaluated with five ratios: expert-approved, expert-disagreed, and NCCN-aligned RECs as proportions of all ChatGPT RECs, plus coverage and adherence rates relative to NCCN guidelines. The experts rated response quality on a 1-5 scale for correctness, comprehensiveness, specificity, and appropriateness.
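The five ratios are simple proportions. The following is a minimal sketch of how they could be computed for a single prompt; the variable names and the exact operational definitions of the coverage and adherence rates are assumptions for illustration, not taken from the paper's analysis code:

```python
def ratio(numerator: int, total: int) -> float:
    """Proportion as a percentage, guarding against an empty denominator."""
    return 100.0 * numerator / total if total else 0.0

# Hypothetical expert annotations for one prompt (counts are made up).
recs_total = 6           # RECs ChatGPT produced for this case
recs_approved = 5        # RECs the expert raters approved
recs_disagreed = 1       # RECs the raters disagreed with
recs_nccn_aligned = 5    # RECs matching an NCCN-listed option
nccn_covered = 4         # applicable NCCN options ChatGPT mentioned
nccn_total = 5           # NCCN options applicable to this case

approved_rate = ratio(recs_approved, recs_total)     # expert-approved / total RECs
disagreed_rate = ratio(recs_disagreed, recs_total)   # expert-disagreed / total RECs
aligned_rate = ratio(recs_nccn_aligned, recs_total)  # NCCN-aligned / total RECs
coverage_rate = ratio(nccn_covered, nccn_total)      # share of NCCN options covered
adherence_rate = aligned_rate                        # adherence to NCCN (assumed equal
                                                     # to the NCCN-aligned share of RECs)
```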
Results: ChatGPT-4 outperformed ChatGPT-3.5 on prostate cancer inquiries, with an average word count of 317.3 versus 124.4 (p < 0.001) and 6.1 versus 3.9 RECs (p < 0.001). Its rater-approved REC ratio (96.1% vs. 89.4%) and alignment with NCCN guidelines (76.8% vs. 49.1%, p = 0.001) were superior, and it scored significantly better on all quality dimensions. Across the 108 prompts covering the three cancers, ChatGPT-4 produced an average of 6.0 RECs per case, with an 88.5% rater approval rate, 86.7% NCCN concordance, and only a 9.5% disagreement rate. It achieved high scores for correctness (4.5), comprehensiveness (4.4), specificity (4.0), and appropriateness (4.4). Subgroup analyses across cancer types, disease statuses, and prompt templates are also reported.
Conclusions: ChatGPT-4 showed marked improvement in providing accurate, detailed treatment recommendations for urological cancers in line with clinical guidelines and expert opinion. However, it is vital to recognize that AI tools are not flawless and should be used with caution. ChatGPT could supplement, but not replace, personalized advice from healthcare professionals.