generative AI

  • Article type: Journal Article
    OBJECTIVE: The study developed a framework that leverages an open-source Large Language Model (LLM) to enable clinicians to ask plain-language questions about a patient's entire echocardiogram report history. This approach is intended to streamline the extraction of clinical insights from multiple echocardiogram reports, particularly in patients with complex cardiac diseases, thereby enhancing both patient care and research efficiency.
    METHODS: Data from over 10 years were collected, comprising echocardiogram reports from patients with more than 10 echocardiograms on file at the Mount Sinai Health System. These reports were converted into a single document per patient for analysis, broken down into snippets, and relevant snippets were retrieved using text similarity measures. The LLaMA-2 70B model was employed to analyze the text using a specially crafted prompt. The model's performance was evaluated against ground-truth answers created by faculty cardiologists.
    RESULTS: The study analyzed 432 reports from 37 patients for a total of 100 question-answer pairs. The LLM correctly answered 90% of questions, with accuracies of 83% for temporality, 93% for severity assessment, 84% for intervention identification, and 100% for diagnosis retrieval. Errors mainly stemmed from the LLM's inherent limitations, such as misinterpreting numbers or hallucinations.
    CONCLUSIONS: The study demonstrates the feasibility and effectiveness of using a local, open-source LLM for querying and interpreting echocardiogram report data. This approach offers a significant improvement over traditional keyword-based searches, enabling more contextually relevant and semantically accurate responses; in turn, it shows promise in enhancing clinical decision-making and research by facilitating more efficient access to complex patient data.
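    The retrieval step described in the METHODS (splitting each patient's concatenated reports into snippets, ranking them against the question by text similarity, and assembling a prompt for the LLM) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the snippet size, the bag-of-words cosine similarity, and the prompt wording are all assumptions, since the abstract specifies only "text similarity measures" and a "specially crafted prompt" for LLaMA-2 70B.

```python
import re
from collections import Counter

def split_into_snippets(report_text, max_sentences=3):
    """Split a concatenated per-patient report document into short snippets."""
    sentences = re.split(r"(?<=[.!?])\s+", report_text.strip())
    return [
        " ".join(sentences[i:i + max_sentences])
        for i in range(0, len(sentences), max_sentences)
    ]

def similarity(a, b):
    """Cosine similarity over bag-of-words token counts (one possible text
    similarity measure; the paper does not name its exact metric)."""
    ta = Counter(re.findall(r"\w+", a.lower()))
    tb = Counter(re.findall(r"\w+", b.lower()))
    dot = sum(ta[w] * tb[w] for w in set(ta) & set(tb))
    norm = (sum(v * v for v in ta.values()) ** 0.5) * \
           (sum(v * v for v in tb.values()) ** 0.5)
    return dot / norm if norm else 0.0

def retrieve(question, snippets, top_k=3):
    """Return the top_k snippets most similar to the question."""
    ranked = sorted(snippets, key=lambda s: similarity(question, s), reverse=True)
    return ranked[:top_k]

def build_prompt(question, context_snippets):
    """Assemble a grounded prompt from the retrieved context; the wording
    here is a placeholder for the study's specially crafted prompt."""
    context = "\n".join(f"- {s}" for s in context_snippets)
    return (
        "Answer using only the echocardiogram report excerpts below.\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
```

    In use, the returned prompt would be sent to a locally hosted LLaMA-2 70B instance; the retrieval narrows the model's context to the few report snippets most relevant to the clinician's question.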

  • Article type: Journal Article
    Recent artificial intelligence (AI) advancements in cardiovascular care offer potential enhancements in effective diagnosis, treatment, and outcomes. More than 600 U.S. Food and Drug Administration-approved clinical AI algorithms now exist, with 10% focusing on cardiovascular applications, highlighting the growing opportunities for AI to augment care. This review discusses the latest advancements in the field of AI, with a particular focus on the utilization of multimodal inputs and the field of generative AI. Further discussions in this review involve an approach to understanding the larger context in which AI-augmented care may exist, and include a discussion of the need for rigorous evaluation, appropriate infrastructure for deployment, ethics and equity assessments, regulatory oversight, and viable business cases for deployment. Embracing this rapidly evolving technology while setting an appropriately high evaluation benchmark with careful and patient-centered implementation will be crucial for cardiology to leverage AI to enhance patient care and the provider experience.

  • Article type: Journal Article
    BACKGROUND: The application of large language models across commercial and consumer contexts has grown exponentially in recent years. However, a gap exists in the literature on how large language models can support nursing practice, education, and research. This study aimed to synthesize the existing literature on current and potential uses of large language models across the nursing profession.
    METHODS: A rapid review of the literature, guided by Cochrane rapid review methodology and PRISMA reporting standards, was conducted. An expert health librarian assisted in developing broad inclusion criteria to account for the emerging nature of literature related to large language models. Three electronic databases (i.e., PubMed, CINAHL, and Embase) were searched to identify relevant literature in August 2023. Articles that discussed the development, use, and application of large language models within nursing were included for analysis.
    RESULTS: The literature search identified a total of 2028 articles that met the inclusion criteria. After systematically reviewing abstracts, titles, and full texts, 30 articles were included in the final analysis. Nearly all (93 %; n = 28) of the included articles used ChatGPT as an example, and subsequently discussed the use and value of large language models in nursing education (47 %; n = 14), clinical practice (40 %; n = 12), and research (10 %; n = 3). While the most common assessment of large language models was conducted by human evaluation (26.7 %; n = 8), this analysis also identified common limitations of large language models in nursing, including lack of systematic evaluation, as well as other ethical and legal considerations.
    CONCLUSIONS: This is the first review to summarize contemporary literature on current and potential uses of large language models in nursing practice, education, and research. Although there are significant opportunities to apply large language models, the use and adoption of these models within nursing have elicited a series of challenges, such as ethical issues related to bias, misuse, and plagiarism.
    CONCLUSIONS: Given the relative novelty of large language models, ongoing efforts to develop and implement meaningful assessments, evaluations, standards, and guidelines for applying large language models in nursing are recommended to ensure appropriate, accurate, and safe use. Future research along with clinical and educational partnerships is needed to enhance understanding and application of large language models in nursing and healthcare.

  • Article type: Journal Article
    AI-enabled synthetic biology has tremendous potential but also significantly increases biorisks and brings about a new set of dual use concerns. The picture is complicated given the vast innovations envisioned to emerge by combining emerging technologies, as AI-enabled synthetic biology potentially scales up bioengineering into industrial biomanufacturing. However, the literature review indicates that goals such as maintaining a reasonable scope for innovation, or more ambitiously to foster a huge bioeconomy do not necessarily contrast with biosafety, but need to go hand in hand. This paper presents a literature review of the issues and describes emerging frameworks for policy and practice that traverse the options of command-and-control, stewardship, bottom-up, and laissez-faire governance. How to achieve early warning systems that enable prevention and mitigation of future AI-enabled biohazards from the lab, from deliberate misuse, or from the public realm, will constantly need to evolve, and adaptive, interactive approaches should emerge. Although biorisk is subject to an established governance regime, and scientists generally adhere to biosafety protocols, even experimental, but legitimate use by scientists could lead to unexpected developments. Recent advances in chatbots enabled by generative AI have revived fears that advanced biological insight can more easily get into the hands of malignant individuals or organizations. Given these sets of issues, society needs to rethink how AI-enabled synthetic biology should be governed. The suggested way to visualize the challenge at hand is whack-a-mole governance, although the emerging solutions are perhaps not so different either.

  • Article type: Systematic Review
    BACKGROUND: The use of mobile devices for delivering health-related services (mobile health [mHealth]) has rapidly increased, leading to a demand for summarizing the state of the art and practice through systematic reviews. However, the systematic review process is a resource-intensive and time-consuming process. Generative artificial intelligence (AI) has emerged as a potential solution to automate tedious tasks.
    OBJECTIVE: This study aimed to explore the feasibility of using generative AI tools to automate time-consuming and resource-intensive tasks in a systematic review process and assess the scope and limitations of using such tools.
    METHODS: We used the design science research methodology. The solution proposed is to use cocreation with a generative AI, such as ChatGPT, to produce software code that automates the process of conducting systematic reviews.
    RESULTS: A triggering prompt was generated, and assistance from the generative AI was used to guide the steps toward developing, executing, and debugging a Python script. Errors in code were solved through conversational exchange with ChatGPT, and a tentative script was created. The code pulled the mHealth solutions from the Google Play Store and searched their descriptions for keywords that hinted toward evidence base. The results were exported to a CSV file, which was compared to the initial outputs of other similar systematic review processes.
    CONCLUSIONS: This study demonstrates the potential of using generative AI to automate the time-consuming process of conducting systematic reviews of mHealth apps. This approach could be particularly useful for researchers with limited coding skills. However, the study has limitations related to the design science research methodology, subjectivity bias, and the quality of the search results used to train the language model.
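    The RESULTS describe a Python script that pulls mHealth apps from the Google Play Store, scans their descriptions for keywords hinting at an evidence base, and exports the findings to CSV. A minimal sketch of the scanning and export steps is shown below. This is a hypothetical reconstruction, not the script the authors co-created with ChatGPT: the app records are invented stand-ins for data fetched from the Google Play Store (the fetching step is omitted), and the keyword list is an assumption.

```python
import csv

# Hypothetical, pre-fetched app records; the study pulled these from the
# Google Play Store, a step that requires scraping and is omitted here.
APPS = [
    {"name": "GlucoTrack",
     "description": "Diabetes log validated in a randomized controlled trial."},
    {"name": "StepBuddy",
     "description": "A fun pedometer with daily goals."},
    {"name": "MindEase",
     "description": "Mindfulness exercises based on peer-reviewed evidence."},
]

# Assumed keywords hinting that a description claims an evidence base.
EVIDENCE_KEYWORDS = [
    "clinical trial", "randomized", "peer-reviewed", "validated", "evidence",
]

def flag_evidence(description):
    """Return the evidence-related keywords found in a description."""
    text = description.lower()
    return [kw for kw in EVIDENCE_KEYWORDS if kw in text]

def export_csv(apps, path):
    """Write one row per app with any matched evidence keywords."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "evidence_keywords"])
        for app in apps:
            writer.writerow(
                [app["name"], "; ".join(flag_evidence(app["description"]))]
            )
```

    The resulting CSV could then be compared against the screening output of a conventional systematic review process, as the study did with its own script's output.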

  • Article type: Journal Article
    To explore the intersection of chatbots and HIV prevention and care. Current applications of chatbots in HIV services, the challenges faced, recent advancements, and future research directions are presented and discussed.
    Chatbots facilitate sensitive discussions about HIV, thereby promoting prevention and care strategies. Trustworthiness and accuracy of information were identified as primary factors influencing user engagement with chatbots. Additionally, the integration of AI-driven models that process and generate human-like text into chatbots poses both breakthroughs and challenges in terms of privacy, bias, resources, and ethical issues. Chatbots in HIV prevention and care show potential; however, significant work remains in addressing associated ethical and practical concerns. The integration of large language models into chatbots is a promising future direction for their effective deployment in HIV services. Encouraging future research, collaboration among stakeholders, and bold innovative thinking will be pivotal in harnessing the full potential of chatbot interventions.

  • Article type: Journal Article
    BACKGROUND: The emergence of systems based on large language models (LLMs) such as OpenAI's ChatGPT has created a range of discussions in scholarly circles. Since LLMs generate grammatically correct and mostly relevant (yet sometimes outright wrong, irrelevant or biased) outputs in response to provided prompts, using them in various writing tasks including writing peer review reports could result in improved productivity. Given the significance of peer reviews in the existing scholarly publication landscape, exploring challenges and opportunities of using LLMs in peer review seems urgent. After the generation of the first scholarly outputs with LLMs, we anticipate that peer review reports too would be generated with the help of these systems. However, there are currently no guidelines on how these systems should be used in review tasks.
    METHODS: To investigate the potential impact of using LLMs on the peer review process, we used five core themes within discussions about peer review suggested by Tennant and Ross-Hellauer. These include 1) reviewers' role, 2) editors' role, 3) functions and quality of peer reviews, 4) reproducibility, and 5) the social and epistemic functions of peer reviews. We provide a small-scale exploration of ChatGPT's performance regarding identified issues.
    RESULTS: LLMs have the potential to substantially alter the role of both peer reviewers and editors. Through supporting both actors in efficiently writing constructive reports or decision letters, LLMs can facilitate higher quality review and address issues of review shortage. However, the fundamental opacity of LLMs' training data, inner workings, data handling, and development processes raises concerns about potential biases, confidentiality and the reproducibility of review reports. Additionally, as editorial work has a prominent function in defining and shaping epistemic communities, as well as negotiating normative frameworks within such communities, partly outsourcing this work to LLMs might have unforeseen consequences for social and epistemic relations within academia. Regarding performance, we identified major enhancements in a short period and expect LLMs to continue developing.
    CONCLUSIONS: We believe that LLMs are likely to have a profound impact on academia and scholarly communication. While potentially beneficial to the scholarly communication system, many uncertainties remain and their use is not without risks. In particular, concerns about the amplification of existing biases and inequalities in access to appropriate infrastructure warrant further attention. For the moment, we recommend that if LLMs are used to write scholarly reviews and decision letters, reviewers and editors should disclose their use and accept full responsibility for data security and confidentiality, and their reports' accuracy, tone, reasoning and originality.
