Keywords: AI; ChatGPT; ChatGPT-4; accuracy; artificial intelligence; clinical documentation; documentation; documentations; generation; generative AI; generative artificial intelligence; large language model; medical documentation; medical note; medical notes; publicly available; quality; reproducibility; simulation; transcript; transcripts

MeSH: Humans; Physician-Patient Relations; Documentation / methods; Electronic Health Records; Artificial Intelligence

Source: DOI:10.2196/54419   PDF (PubMed)

Abstract:
BACKGROUND: Medical documentation plays a crucial role in clinical practice, facilitating accurate patient management and communication among health care professionals. However, inaccuracies in medical notes can lead to miscommunication and diagnostic errors. Additionally, the demands of documentation contribute to physician burnout. Although intermediaries like medical scribes and speech recognition software have been used to ease this burden, they have limitations in terms of accuracy and addressing provider-specific metrics. The integration of ambient artificial intelligence (AI)-powered solutions offers a promising way to improve documentation while fitting seamlessly into existing workflows.
OBJECTIVE: This study aims to assess the accuracy and quality of Subjective, Objective, Assessment, and Plan (SOAP) notes generated by ChatGPT-4, an AI model, using established transcripts of History and Physical Examination as the gold standard. We seek to identify potential errors and evaluate the model's performance across different categories.
METHODS: We conducted simulated patient-provider encounters representing various ambulatory specialties and transcribed the audio files. Key reportable elements were identified, and ChatGPT-4 was used to generate SOAP notes based on these transcripts. Three versions of each note were created and compared to the gold standard via chart review; errors generated from the comparison were categorized as omissions, incorrect information, or additions. We compared the accuracy of data elements across versions, transcript length, and data categories. Additionally, we assessed note quality using the Physician Documentation Quality Instrument (PDQI) scoring system.
RESULTS: Although ChatGPT-4 consistently generated SOAP-style notes, there were, on average, 23.6 errors per clinical case, with errors of omission (86%) being the most common, followed by addition errors (10.5%) and inclusion of incorrect facts (3.2%). There was significant variance between replicates of the same case, with only 52.9% of data elements reported correctly across all 3 replicates. The accuracy of data elements varied across cases, with the highest accuracy observed in the "Objective" section. Consequently, the measure of note quality, assessed by PDQI, demonstrated intra- and intercase variance. Finally, the accuracy of ChatGPT-4 was inversely correlated to both the transcript length (P=.05) and the number of scorable data elements (P=.05).
CONCLUSIONS: Our study reveals substantial variability in errors, accuracy, and note quality generated by ChatGPT-4. Errors were not limited to specific sections, and the inconsistency in error types across replicates complicated predictability. Transcript length and data complexity were inversely correlated with note accuracy, raising concerns about the model's effectiveness in handling complex medical cases. The quality and reliability of clinical notes produced by ChatGPT-4 do not meet the standards required for clinical use. Although AI holds promise in health care, caution should be exercised before widespread adoption. Further research is needed to address accuracy, variability, and potential errors. ChatGPT-4, while valuable in various applications, should not be considered a safe alternative to human-generated clinical documentation at this time.