Large language model

  • Article type: Journal Article
    BACKGROUND: Chatbots, which are based on large language models, are increasingly being used in public health. However, the effectiveness of chatbot responses has been debated, and their performance in myopia prevention and control has not been fully explored. This study aimed to evaluate the effectiveness of three well-known chatbots (ChatGPT, Claude, and Bard) in responding to public health questions about myopia.
    METHODS: Nineteen public health questions about myopia (covering three topics: policy, basics, and measures) were answered individually by the three chatbots. After the order was shuffled, each chatbot response was independently rated by 4 raters for comprehensiveness, accuracy, and relevance.
    RESULTS: The study's questions underwent reliability testing. There was a significant difference in word count among the responses of the 3 chatbots; from most to least, the order was ChatGPT, Bard, and Claude. All 3 chatbots had a composite score above 4 out of 5. ChatGPT scored the highest in all aspects of the assessment. However, all chatbots exhibited shortcomings, such as giving fabricated responses.
    CONCLUSIONS: Chatbots have shown great potential in public health, with ChatGPT performing best. The future use of chatbots as a public health tool will require rapid development of standards for their use and monitoring, as well as continued research, evaluation, and improvement of chatbots.
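    The rating workflow above (word counts per response, then mean scores across four independent raters) is straightforward to sketch. The responses and ratings below are hypothetical placeholders, not data from the study:

```python
# Tally word counts and mean rater scores for chatbot responses.
# All responses and ratings here are hypothetical illustrations.

def word_count(response: str) -> int:
    """Number of whitespace-separated words in a response."""
    return len(response.split())

def mean_score(ratings: list) -> float:
    """Average of independent rater scores (e.g., 4 raters on a 1-5 scale)."""
    return sum(ratings) / len(ratings)

responses = {
    "ChatGPT": "Myopia control combines outdoor time, screen limits, and regular eye exams.",
    "Bard": "Spend more time outdoors and limit near work.",
    "Claude": "Get regular eye exams.",
}

# Word counts, then chatbot names ordered from most to least words.
counts = {name: word_count(text) for name, text in responses.items()}
order = sorted(counts, key=counts.get, reverse=True)

# Mean composite score across 4 hypothetical raters per chatbot.
ratings = {"ChatGPT": [5, 4, 5, 4], "Bard": [4, 4, 4, 5], "Claude": [4, 4, 3, 5]}
means = {name: mean_score(r) for name, r in ratings.items()}
```

    Sorting the word counts reproduces a most-to-least ordering like the one reported, and averaging each rater's list gives the per-chatbot composite scores.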

  • Article type: Journal Article
    OBJECTIVE: To evaluate the performance of four large language models (LLMs), GPT-4, PaLM 2, Qwen, and Baichuan 2, in generating responses to inquiries from Chinese patients about dry eye disease (DED).
    DESIGN: A two-phase study, including a cross-sectional test in the first phase and a real-world clinical assessment in the second phase.
    PARTICIPANTS: Eight board-certified ophthalmologists and 46 patients with DED.
    METHODS: The chatbots' responses to Chinese patients' inquiries about DED were assessed. In the first phase, six senior ophthalmologists subjectively rated the chatbots' responses using a 5-point Likert scale across five domains: correctness, completeness, readability, helpfulness, and safety. Objective readability analysis was performed using a Chinese readability analysis platform. In the second phase, 46 representative patients with DED posed questions to the two language models (GPT-4 and Baichuan 2) that had performed best in the first phase and then rated the answers for satisfaction and readability. Two senior ophthalmologists then assessed the responses across the five domains.
    MAIN OUTCOME MEASURES: Subjective scores for the five domains and objective readability scores in the first phase; patient satisfaction, readability scores, and subjective scores for the five domains in the second phase.
    RESULTS: In the first phase, GPT-4 exhibited superior performance across the five domains (correctness: 4.47; completeness: 4.39; readability: 4.47; helpfulness: 4.49; safety: 4.47; p < 0.05). However, the readability analysis revealed that GPT-4's responses were highly complex, with an average score of 12.86 (p < 0.05) compared with scores of 10.87, 11.53, and 11.26 for Qwen, Baichuan 2, and PaLM 2, respectively. In the second phase, as shown by the scores for the five domains, both GPT-4 and Baichuan 2 were adept at answering questions posed by patients with DED. However, the completeness of Baichuan 2's responses was relatively poor (4.04 vs. 4.48 for GPT-4, p < 0.05). Nevertheless, Baichuan 2's recommendations were more comprehensible than those of GPT-4 (patient readability: 3.91 vs. 4.61, p < 0.05; ophthalmologist readability: 2.67 vs. 4.33).
    CONCLUSIONS: The findings underscore the potential of LLMs, particularly GPT-4 and Baichuan 2, in delivering accurate and comprehensive responses to questions from Chinese patients about DED.

  • Article type: Journal Article
    BACKGROUND: Recent advancements in artificial intelligence (AI) and large language models (LLMs) have shown potential in medical fields, including dermatology. With the introduction of image analysis capabilities in LLMs, their application in dermatological diagnostics has garnered significant interest. These capabilities are enabled by the integration of computer vision techniques into the underlying architecture of LLMs.
    OBJECTIVE: This study aimed to compare the diagnostic performance of Claude 3 Opus and ChatGPT with GPT-4 in analyzing dermoscopic images for melanoma detection, providing insights into their strengths and limitations.
    METHODS: We randomly selected 100 histopathology-confirmed dermoscopic images (50 malignant, 50 benign) from the International Skin Imaging Collaboration (ISIC) archive using a computer-generated randomization process. The ISIC archive was chosen due to its comprehensive and well-annotated collection of dermoscopic images, ensuring a diverse and representative sample. Images were included if they were dermoscopic images of melanocytic lesions with histopathologically confirmed diagnoses. Each model was given the same prompt, instructing it to provide the top 3 differential diagnoses for each image, ranked by likelihood. Primary diagnosis accuracy, accuracy of the top 3 differential diagnoses, and malignancy discrimination ability were assessed. The McNemar test was chosen to compare the diagnostic performance of the 2 models, as it is suitable for analyzing paired nominal data.
    RESULTS: In the primary diagnosis, Claude 3 Opus achieved 54.9% sensitivity (95% CI 44.08%-65.37%), 57.14% specificity (95% CI 46.31%-67.46%), and 56% accuracy (95% CI 46.22%-65.42%), while ChatGPT demonstrated 56.86% sensitivity (95% CI 45.99%-67.21%), 38.78% specificity (95% CI 28.77%-49.59%), and 48% accuracy (95% CI 38.37%-57.75%). The McNemar test showed no significant difference between the 2 models (P=.17). For the top 3 differential diagnoses, Claude 3 Opus and ChatGPT included the correct diagnosis in 76% (95% CI 66.33%-83.77%) and 78% (95% CI 68.46%-85.45%) of cases, respectively. The McNemar test showed no significant difference (P=.56). In malignancy discrimination, Claude 3 Opus outperformed ChatGPT with 47.06% sensitivity, 81.63% specificity, and 64% accuracy, compared to 45.1%, 42.86%, and 44%, respectively. The McNemar test showed a significant difference (P<.001). Claude 3 Opus had an odds ratio of 3.951 (95% CI 1.685-9.263) in discriminating malignancy, while ChatGPT-4 had an odds ratio of 0.616 (95% CI 0.297-1.278).
    CONCLUSIONS: Our study highlights the potential of LLMs in assisting dermatologists but also reveals their limitations. Both models made errors in diagnosing melanoma and benign lesions. These findings underscore the need for developing robust, transparent, and clinically validated AI models through collaborative efforts between AI researchers, dermatologists, and other health care professionals. While AI can provide valuable insights, it cannot yet replace the expertise of trained clinicians.
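    The McNemar test used above compares paired binary outcomes (each image classified by both models), and its test statistic depends only on the discordant pairs. A minimal sketch of the exact version follows; note the study may have used the chi-square approximation instead, and the counts in the usage note are hypothetical:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact McNemar test p-value for paired binary outcomes.

    b: pairs where model A is correct and model B is wrong.
    c: pairs where model B is correct and model A is wrong.
    Under the null hypothesis the discordant pairs split 50/50,
    so the smaller count follows Binomial(b + c, 0.5); the p-value
    is the two-sided binomial tail probability.
    """
    n = b + c
    if n == 0:
        return 1.0  # no discordant pairs: no evidence of a difference
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

    For example, with 2 vs. 8 discordant pairs the exact p-value is about 0.109, so that split alone would not reach significance; concordant pairs (both models right or both wrong) do not enter the calculation at all.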

  • Article type: Journal Article
    Cervical spondylosis is the most common degenerative spinal disorder in modern societies. Patients need a great deal of medical knowledge, and large language models (LLMs) offer them a novel and convenient tool for accessing medical advice. In this study, we collected the questions most frequently asked by patients with cervical spondylosis in clinical work and internet consultations. The accuracy of the answers provided by the LLMs was evaluated and graded by 3 experienced spinal surgeons. Comparative analysis of the responses showed that all LLMs could provide satisfactory results and that, among them, GPT-4 had the highest accuracy rate. Variation across sections among the LLMs revealed the boundaries of their capabilities and directions for the development of artificial intelligence.

  • Article type: Journal Article
    Large language models (LLMs), such as ChatGPT, have demonstrated impressive capabilities in various tasks and attracted increasing interest as a natural language interface across many domains. Recently, large vision-language models (VLMs) that learn rich vision-language correlation from image-text pairs, like BLIP-2 and GPT-4, have been intensively investigated. However, despite these developments, the application of LLMs and VLMs in image quality assessment (IQA), particularly in medical imaging, remains unexplored. Such an application would be valuable for objective performance evaluation and could potentially supplement or even replace radiologists' opinions. To this end, this study introduces IQAGPT, an innovative computed tomography (CT) IQA system that integrates an image-quality captioning VLM with ChatGPT to generate quality scores and textual reports. First, a CT-IQA dataset comprising 1,000 CT slices with diverse quality levels is professionally annotated and compiled for training and evaluation. To better leverage the capabilities of LLMs, the annotated quality scores are converted into semantically rich text descriptions using a prompt template. Second, the image-quality captioning VLM is fine-tuned on the CT-IQA dataset to generate quality descriptions. The captioning model fuses image and text features through cross-modal attention. Third, based on the quality descriptions, users verbally request ChatGPT to rate image-quality scores or produce radiological quality reports. Results demonstrate the feasibility of assessing image quality using LLMs. The proposed IQAGPT outperformed GPT-4 and CLIP-IQA, as well as multitask classification and regression models that rely solely on images.
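    The score-to-text conversion step described above can be illustrated with a toy prompt template; the score bands and wording below are hypothetical, as the paper's actual template is not reproduced here:

```python
# Convert an annotated CT quality score into a semantically rich text
# description via a prompt template, as done before fine-tuning the
# captioning VLM. The labels and template wording are hypothetical.

QUALITY_LABELS = {
    1: "severe artifacts and noise; not diagnostically usable",
    2: "heavy noise with blurred structures",
    3: "moderate noise; major structures visible",
    4: "mild noise; fine structures mostly clear",
    5: "excellent quality with sharp anatomical detail",
}

TEMPLATE = "This CT slice has {label} (quality score {score}/5)."

def score_to_description(score: int) -> str:
    """Render a text description from a numeric quality annotation."""
    return TEMPLATE.format(label=QUALITY_LABELS[score], score=score)
```

    Pairing each slice with such a description, rather than a bare integer, lets the captioning model and the downstream LLM work entirely in natural language.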

  • Article type: Journal Article
    The subcellular localization of messenger RNAs (mRNAs) is a pivotal aspect of biomolecules, tightly linked to gene regulation and protein synthesis, and offers innovative insights into disease diagnosis and drug development in the field of biomedicine. Several computational methods have been proposed to predict the subcellular localization of mRNAs within cells; however, the accuracy of these predictions remains deficient. In this study, we propose mRCat, a predictor based on the gradient boosting tree algorithm, specifically to predict whether mRNAs are localized in the nucleus or in the cytoplasm. The predictor first uses large language models to thoroughly explore hidden information within sequences and then integrates traditional sequence features to jointly characterize mRNA gene sequences. Finally, it employs CatBoost as the base classifier for predicting the subcellular localization of mRNAs. Experimental validation on an independent test set demonstrates that mRCat obtained an accuracy of 0.761, an F1 score of 0.710, an MCC of 0.511, and an AUROC of 0.751. The results indicate that our method has higher accuracy and robustness than other state-of-the-art methods and is anticipated to offer deep insights for biomolecular research.
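    "Traditional sequence features" of the kind integrated alongside the language-model embeddings commonly include k-mer frequencies. A minimal sketch of that featurization step follows; the exact feature set used by mRCat is not specified here, so this is an illustrative assumption:

```python
from itertools import product

def kmer_frequencies(seq: str, k: int = 3) -> dict:
    """Frequency of each possible RNA k-mer in a sequence.

    Returns a fixed-length feature vector (as a dict over all 4**k
    k-mers) of the sort that can be fed, together with embedding
    features, to a gradient boosting classifier such as CatBoost.
    """
    alphabet = "ACGU"
    counts = {"".join(p): 0 for p in product(alphabet, repeat=k)}
    total = max(1, len(seq) - k + 1)  # number of sliding windows
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in counts:  # skip windows containing ambiguous bases
            counts[kmer] += 1
    return {kmer: n / total for kmer, n in counts.items()}
```

    Because the vector length is fixed at 4^k regardless of sequence length, sequences of different lengths map to comparable feature rows.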

  • Article type: Journal Article
    Large language models (LLMs), a natural language processing technology based on deep learning, are currently in the spotlight. These models closely mimic natural language comprehension and generation. Their evolution has undergone several waves of innovation similar to convolutional neural networks. The transformer architecture advancement in generative artificial intelligence marks a monumental leap beyond early-stage pattern recognition via supervised learning. With the expansion of parameters and training data (terabytes), LLMs unveil remarkable human interactivity, encompassing capabilities such as memory retention and comprehension. These advances make LLMs particularly well-suited for roles in healthcare communication between medical practitioners and patients. In this comprehensive review, we discuss the trajectory of LLMs and their potential implications for clinicians and patients. For clinicians, LLMs can be used for automated medical documentation, and given better inputs and extensive validation, LLMs may be able to autonomously diagnose and treat in the future. For patient care, LLMs can be used for triage suggestions, summarization of medical documents, explanation of a patient's condition, and tailoring of patient education materials to the patient's comprehension level. The limitations of LLMs and possible solutions for real-world use are also presented. Given the rapid advancements in this area, this review attempts to briefly cover many roles that LLMs may play in the ophthalmic space, with a focus on improving the quality of healthcare delivery.

  • Article type: Journal Article
    The study aimed to investigate the awareness and use of ChatGPT among undergraduate nursing students. A structured questionnaire was used to assess awareness and use of ChatGPT; it was administered to undergraduate nursing students enrolled in 2021. The response rate to the survey was 90.2% (46/51). Of the respondents, 45 students were aware of ChatGPT, and only one student was not. On usage, 23 students responded; among them, 16 used ChatGPT to enhance their learning experience, six for homework, five for chatting, four for essay writing, and one for other purposes. This study provides valuable insights for the better use of ChatGPT in nursing education.

  • Article type: Journal Article
    The aim of this study was to assess nurses' awareness and use of ChatGPT. The study was conducted in October 2023 with an online questionnaire for 124 nurses in the nursing education programme at West China Hospital. The questionnaire included participants' demographic information, awareness of ChatGPT, and actual experience of using it. A total of 57.3% (71/124) of the nurses completed the survey. Of these, 56.3% (40/71) were aware of ChatGPT and 43.7% (31/71) were not aware of ChatGPT. In terms of use, of the 20 who used ChatGPT, 13 used it for studying, 10 for essay writing, five for research, and two for chatting. This study highlights the potential of ChatGPT to improve nurses' professional competence and effectiveness. Further research will focus on how ChatGPT can be used more effectively to support nurses' professional development and growth.

  • Article type: Journal Article
    As the aging process accelerates, the incidence of chronic diseases in the elderly is rising, so it is crucial to optimize health education for the elderly. Pulmonary aspiration and aspiration pneumonia are significant concerns endangering the health of the elderly. The health education paradigm now in use to prevent pulmonary aspiration in the elderly has numerous flaws, including a lack of home-based health education and the digital divide. Large language models (LLMs), an example of artificial intelligence technology, are anticipated to offer an opportunity to address these issues and to provide easily comprehensible health information for the prevention of pulmonary aspiration in the elderly. Our multidisciplinary research team thoroughly assessed the needs from the perspectives of physicians, nurses, and patients, built a knowledge graph (KG), and developed an intelligent Health EducAtion system based on an LLM for the prevention of elderly Pulmonary Aspiration (the iHEAL-ePA system).
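    A knowledge graph of the kind described can be represented as subject-predicate-object triples that the LLM retrieves before generating an answer. The triples and helper names below are hypothetical illustrations, not content from the iHEAL-ePA system:

```python
# Minimal knowledge-graph sketch: store (subject, predicate, object)
# triples and retrieve facts relevant to a patient question, which an
# LLM could then turn into plain-language education material.
# All triples here are hypothetical examples.

TRIPLES = [
    ("pulmonary aspiration", "risk_factor", "impaired swallowing"),
    ("pulmonary aspiration", "risk_factor", "supine feeding position"),
    ("pulmonary aspiration", "prevention", "upright posture during meals"),
    ("aspiration pneumonia", "caused_by", "pulmonary aspiration"),
]

def facts_about(subject: str) -> list:
    """Return (predicate, object) pairs for a given subject."""
    return [(p, o) for s, p, o in TRIPLES if s == subject]

def prevention_tips(subject: str) -> list:
    """Objects of 'prevention' edges, usable as grounding text for the LLM."""
    return [o for p, o in facts_about(subject) if p == "prevention"]
```

    Grounding the LLM's output in retrieved triples like these, rather than free generation, is one common way such systems try to keep health information accurate.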
