Keywords: ChatGPT; LLM; assessment; benchmark; data set; medicine

Source: DOI: 10.2196/57674

Abstract:
Background: Large language models (LLMs) have made substantial progress on natural language processing tasks and show potential for clinical applications. Despite these capabilities, LLMs in the medical domain are prone to generating hallucinations (responses that are not fully reliable). Hallucinations in LLMs' responses create substantial risks and may threaten patients' physical safety. To detect and prevent this safety risk, it is essential to evaluate LLMs in the medical domain and to build a systematic evaluation.
Objective: We developed MedGPTEval, a comprehensive evaluation system composed of evaluation criteria, medical data sets in Chinese, and publicly available benchmarks.
Methods: First, a set of candidate evaluation criteria was drafted based on a comprehensive literature review. Second, these candidate criteria were optimized using a Delphi method with 5 experts in medicine and engineering. Third, 3 clinical experts designed medical data sets for interacting with LLMs. Finally, benchmarking experiments were conducted on the data sets; the responses generated by LLM-based chatbots were recorded for blind evaluation by 5 licensed medical experts. The resulting evaluation criteria cover medical professional capabilities, social comprehensive capabilities, contextual capabilities, and computational robustness, with 16 detailed indicators. The medical data sets comprise 27 medical dialogues and 7 case reports in Chinese. Three chatbots were evaluated: ChatGPT by OpenAI; ERNIE Bot by Baidu, Inc; and Doctor PuJiang (Dr PJ) by Shanghai Artificial Intelligence Laboratory.
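To make the evaluation setup more concrete, the sketch below shows one way the Chinese medical data sets (27 multiple-turn dialogues and 7 case reports) could be organized as records handed to the chatbots and the blind raters. The field names, class name, and example content are illustrative assumptions, not the authors' released schema.

```python
# Minimal sketch of a record format for MedGPTEval-style test cases.
# Field names and example content are assumptions for illustration only.

from dataclasses import dataclass, field

@dataclass
class EvaluationCase:
    case_id: str
    scenario: str                                     # "multi_turn_dialogue" or "case_report"
    turns: list[str] = field(default_factory=list)    # prompts sent to the chatbot, turn by turn
    reference_points: list[str] = field(default_factory=list)  # facts the blind raters check for

# One hypothetical dialogue entry; the actual data sets are in Chinese.
example = EvaluationCase(
    case_id="dlg_001",
    scenario="multi_turn_dialogue",
    turns=["患者主诉：反复头痛两周", "追问：服用布洛芬后症状是否缓解？"],
    reference_points=["建议测量血压", "提示必要时进行影像学检查"],
)
```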
Results: Dr PJ outperformed ChatGPT and ERNIE Bot in both the multiple-turn medical dialogue and the case report scenarios. Dr PJ also outperformed ChatGPT on the semantic consistency rate and the complete error rate, indicating better robustness. However, Dr PJ scored slightly lower than ChatGPT on medical professional capabilities in the multiple-turn dialogue scenario.
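For readers who want to reproduce this kind of scoring, the sketch below shows one way the blind expert ratings and the two robustness metrics named above (semantic consistency rate and complete error rate) could be aggregated. The abstract does not give exact formulas or rating scales, so the dimension names, the 0-5 scale, and both metric definitions are assumptions rather than the authors' implementation.

```python
# Minimal sketch of MedGPTEval-style score aggregation.
# Scales and metric definitions below are illustrative assumptions.

from statistics import mean

# Hypothetical blind ratings: expert -> capability dimension -> score (0-5).
ratings = {
    "expert_1": {"medical_professional": 4, "social": 5, "contextual": 4, "robustness": 3},
    "expert_2": {"medical_professional": 3, "social": 4, "contextual": 4, "robustness": 4},
    "expert_3": {"medical_professional": 4, "social": 4, "contextual": 5, "robustness": 4},
    "expert_4": {"medical_professional": 5, "social": 4, "contextual": 4, "robustness": 3},
    "expert_5": {"medical_professional": 4, "social": 5, "contextual": 4, "robustness": 4},
}

def dimension_means(ratings: dict) -> dict:
    """Average each capability dimension across the 5 blind raters."""
    dims = next(iter(ratings.values())).keys()
    return {d: mean(r[d] for r in ratings.values()) for d in dims}

def semantic_consistency_rate(consistent_flags: list[bool]) -> float:
    """Share of paraphrased prompts whose answers stay consistent with the
    answer to the original prompt (assumed definition)."""
    return sum(consistent_flags) / len(consistent_flags)

def complete_error_rate(wrong_flags: list[bool]) -> float:
    """Share of responses judged completely wrong (assumed definition)."""
    return sum(wrong_flags) / len(wrong_flags)

if __name__ == "__main__":
    print(dimension_means(ratings))
    print(semantic_consistency_rate([True, True, False, True]))  # 0.75
    print(complete_error_rate([False, False, True, False]))      # 0.25
```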
Conclusions: MedGPTEval provides comprehensive criteria for evaluating LLM-based chatbots in the medical domain, open-source data sets, and benchmarks assessing 3 LLMs. Experimental results show that Dr PJ outperforms ChatGPT and ERNIE Bot in both social and professional contexts. The assessment system can therefore be readily adopted by researchers in this community to extend the open-source data set.