Keywords: AI; AI chatbots; AI language model; ChatGPT; NLP; antigen screening; artificial intelligence; cancer; decision-making; health care professional; health care professionals; large language model; man; medical information; men; multimetric assessment; natural language processing; patient education; prostate; prostate cancer; prostate specific

MeSH: Humans; Male; Prostatic Neoplasms; Patient Education as Topic/methods; Artificial Intelligence

Source: DOI:10.2196/55939   PDF (PubMed)

Abstract:
BACKGROUND: Artificial intelligence (AI) chatbots, such as ChatGPT, have made significant progress. These chatbots, particularly popular among health care professionals and patients, are transforming patient education and disease experience with personalized information. Accurate, timely patient education is crucial for informed decision-making, especially regarding prostate-specific antigen screening and treatment options. However, the accuracy and reliability of AI chatbots' medical information must be rigorously evaluated. Studies testing ChatGPT's knowledge of prostate cancer are emerging, but there is a need for ongoing evaluation to ensure the quality and safety of information provided to patients.
OBJECTIVE: This study aims to evaluate the quality, accuracy, and readability of ChatGPT-4's responses to common prostate cancer questions posed by patients.
METHODS: Overall, 8 questions were formulated with an inductive approach based on information topics in peer-reviewed literature and Google Trends data. Adapted versions of the Patient Education Materials Assessment Tool for AI (PEMAT-AI), Global Quality Score, and DISCERN-AI tools were used by 4 independent reviewers to assess the quality of the AI responses. The 8 AI outputs were judged by 7 expert urologists, using an assessment framework developed to assess accuracy, safety, appropriateness, actionability, and effectiveness. The AI responses' readability was assessed using established algorithms (Flesch Reading Ease score, Gunning Fog Index, Flesch-Kincaid Grade Level, the Coleman-Liau Index, and Simple Measure of Gobbledygook [SMOG] Index). A brief tool (Reference Assessment AI [REF-AI]) was developed to analyze the references provided by AI outputs, assessing for reference hallucination, relevance, and quality of references.
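The readability metrics named above are standard closed-form formulas over sentence, word, letter, and syllable counts. The sketch below is a rough illustration (not the authors' scoring pipeline) using a naive regex tokenizer and a heuristic syllable counter; validated implementations are assumed for the actual assessment.

```python
import re

def counts(text: str):
    """Crude counts of sentences, words, letters, syllables, and polysyllabic words."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    letters = sum(len(re.sub(r"[^A-Za-z]", "", w)) for w in words)

    def syllables(word: str) -> int:
        # Heuristic: count vowel groups, never fewer than 1.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    syll = [syllables(w) for w in words]
    complex_words = sum(1 for s in syll if s >= 3)  # "polysyllabic" words
    return sentences, n_words, letters, sum(syll), complex_words

def readability(text: str) -> dict:
    s, w, letters, syll, cplx = counts(text)
    wps, spw = w / s, syll / w  # words per sentence, syllables per word
    return {
        "flesch_reading_ease": 206.835 - 1.015 * wps - 84.6 * spw,
        "flesch_kincaid_grade": 0.39 * wps + 11.8 * spw - 15.59,
        "gunning_fog": 0.4 * (wps + 100 * cplx / w),
        "coleman_liau": 0.0588 * (100 * letters / w) - 0.296 * (100 * s / w) - 15.8,
        "smog": 1.0430 * (cplx * 30 / s) ** 0.5 + 3.1291,  # SMOG assumes >=30 sentences
    }

if __name__ == "__main__":
    sample = ("Prostate-specific antigen screening should be discussed with your doctor. "
              "The decision depends on age, family history, and personal preference.")
    for name, score in readability(sample).items():
        print(f"{name}: {score:.2f}")
```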
RESULTS: The PEMAT-AI understandability score was very good (mean 79.44%, SD 10.44%), the DISCERN-AI rating was scored as "good" quality (mean 13.88, SD 0.93), and the Global Quality Score was high (mean 4.46/5, SD 0.50). The Natural Language Assessment Tool for AI had a pooled mean accuracy of 3.96 (SD 0.91), safety of 4.32 (SD 0.86), appropriateness of 4.45 (SD 0.81), actionability of 4.05 (SD 1.15), and effectiveness of 4.09 (SD 0.98). The readability algorithm consensus was "difficult to read" (Flesch Reading Ease score mean 45.97, SD 8.69; Gunning Fog Index mean 14.55, SD 4.79), averaging an 11th-grade reading level, equivalent to 15- to 17-year-olds (Flesch-Kincaid Grade Level mean 12.12, SD 4.34; Coleman-Liau Index mean 12.75, SD 1.98; SMOG Index mean 11.06, SD 3.20). REF-AI identified 2 reference hallucinations, while the majority (28/30, 93%) of references appropriately supplemented the text. Most references (26/30, 86%) were from reputable government organizations, while a handful were direct citations from scientific literature.
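For context, a mean Flesch Reading Ease score of about 46 falls in the conventionally cited "difficult" band (roughly college-level text), consistent with the reported grade-level estimates. A minimal lookup of the commonly used score bands, assuming the standard Flesch interpretation table, might look like this:

```python
# Commonly cited Flesch Reading Ease bands (lower bound inclusive; a simplification).
FLESCH_BANDS = [
    (90, "very easy (about 5th grade)"),
    (80, "easy (6th grade)"),
    (70, "fairly easy (7th grade)"),
    (60, "plain English (8th-9th grade)"),
    (50, "fairly difficult (10th-12th grade)"),
    (30, "difficult (college)"),
    (0,  "very difficult (college graduate)"),
]

def flesch_band(score: float) -> str:
    """Return the conventional difficulty label for a Flesch Reading Ease score."""
    for lower, label in FLESCH_BANDS:
        if score >= lower:
            return label
    return FLESCH_BANDS[-1][1]

print(flesch_band(45.97))  # -> "difficult (college)", matching the study's consensus
```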
CONCLUSIONS: Our analysis found that ChatGPT-4 provides generally good responses to common prostate cancer queries, making it a potentially valuable tool for patient education in prostate cancer care. Objective quality assessment tools indicated that the natural language processing outputs were generally reliable and appropriate, but there is room for improvement.