AI implementation

  • Article Type: Journal Article
    BACKGROUND: Artificial intelligence (AI) holds immense potential for enhancing clinical and administrative health care tasks. However, slow adoption and implementation challenges highlight the need to consider how humans can effectively collaborate with AI within broader socio-technical systems in health care.
    OBJECTIVE: In the example of intensive care units (ICUs), we compare data scientists' and clinicians' assessments of the optimal utilization of human and AI capabilities by determining suitable levels of human-AI teaming for safely and meaningfully augmenting or automating 6 core tasks. The goal is to provide actionable recommendations for policy makers and health care practitioners regarding AI design and implementation.
    METHODS: In this multimethod study, we combine a systematic task analysis across 6 ICUs with an international Delphi survey involving 19 health data scientists from the industry and academia and 61 ICU clinicians (25 physicians and 36 nurses) to define and assess optimal levels of human-AI teaming (level 1=no performance benefits; level 2=AI augments human performance; level 3=humans augment AI performance; level 4=AI performs without human input). Stakeholder groups also considered ethical and social implications.
    RESULTS: Both stakeholder groups chose level 2 and 3 human-AI teaming for 4 out of 6 core tasks in the ICU. For one task (monitoring), level 4 was the preferred design choice. For the task of patient interactions, both data scientists and clinicians agreed that AI should not be used regardless of technological feasibility due to the importance of the physician-patient and nurse-patient relationship and ethical concerns. Human-AI design choices rely on interpretability, predictability, and control over AI systems. If these conditions are not met and AI performs below human-level reliability, a reduction to level 1 or shifting accountability away from human end users is advised. If AI performs at or beyond human-level reliability and these conditions are not met, shifting to level 4 automation should be considered to ensure safe and efficient human-AI teaming.
    CONCLUSIONS: By considering the sociotechnical system and determining appropriate levels of human-AI teaming, our study showcases the potential for improving the safety and effectiveness of AI usage in ICUs and broader health care settings. Regulatory measures should prioritize interpretability, predictability, and control if clinicians hold full accountability. Ethical and social implications must be carefully evaluated to ensure effective collaboration between humans and AI, particularly considering the most recent advancements in generative AI.
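The escalation logic reported in the results can be sketched as a simple decision rule. The function name and boolean inputs below are illustrative (the study publishes no code); the sketch only encodes the if/then recommendations stated in the abstract:

```python
def recommended_teaming_levels(interpretable: bool, predictable: bool,
                               controllable: bool,
                               ai_at_or_above_human_reliability: bool) -> set:
    """Sketch of the escalation logic described in the Delphi results.

    Levels: 1 = no performance benefit, 2 = AI augments human performance,
    3 = humans augment AI performance, 4 = AI performs without human input.
    """
    if interpretable and predictable and controllable:
        # Conditions met: augmentation (levels 2-3) was the preferred
        # design choice for most ICU core tasks.
        return {2, 3}
    if ai_at_or_above_human_reliability:
        # Conditions unmet, but AI is at least human-level reliable:
        # shifting to level 4 automation should be considered.
        return {4}
    # Conditions unmet and AI below human-level reliability: reduce to
    # level 1 (or shift accountability away from human end users).
    return {1}
```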

  • Article Type: Journal Article
    BACKGROUND: The leaders of health care organizations are grappling with rising expenses and surging demands for health services. In response, they are increasingly embracing artificial intelligence (AI) technologies to improve patient care delivery, alleviate operational burdens, and efficiently improve health care safety and quality.
    OBJECTIVE: In this paper, we map the current literature and synthesize insights on the role of leadership in driving AI transformation within health care organizations.
    METHODS: We conducted a comprehensive search across several databases, including MEDLINE (via Ovid), PsycINFO (via Ovid), CINAHL (via EBSCO), Business Source Premier (via EBSCO), and Canadian Business & Current Affairs (via ProQuest), spanning articles published from 2015 to June 2023 discussing AI transformation within the health care sector. Specifically, we focused on empirical studies with a particular emphasis on leadership. We used an inductive, thematic analysis approach to qualitatively map the evidence. The findings were reported in accordance with the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analysis extension for Scoping Reviews) guidelines.
    RESULTS: A comprehensive review of 2813 unique abstracts led to the retrieval of 97 full-text articles, with 22 included for detailed assessment. Our literature mapping reveals that successful AI integration within healthcare organizations requires leadership engagement across technological, strategic, operational, and organizational domains. Leaders must demonstrate a blend of technical expertise, adaptive strategies, and strong interpersonal skills to navigate the dynamic healthcare landscape shaped by complex regulatory, technological, and organizational factors.
    CONCLUSIONS: In conclusion, leading AI transformation in healthcare requires a multidimensional approach, with leadership across technological, strategic, operational, and organizational domains. Organizations should implement a comprehensive leadership development strategy, including targeted training and cross-functional collaboration, to equip leaders with the skills needed for AI integration. Additionally, when upskilling or recruiting AI talent, priority should be given to individuals with a strong mix of technical expertise, adaptive capacity, and interpersonal acumen, enabling them to navigate the unique complexities of the healthcare environment.

  • Article Type: Journal Article
    Successful artificial intelligence (AI) implementation is predicated on the trust of clinicians and patients, and is achieved through a culture of responsible use, focusing on regulations, standards, and education. Otolaryngologists can overcome barriers in AI implementation by promoting data standardization through professional societies, engaging in institutional efforts to integrate AI, and developing otolaryngology-specific AI education for both trainees and practitioners.

  • Article Type: Journal Article
    BACKGROUND: Despite substantial progress in AI research for healthcare, translating research achievements to AI systems in clinical settings is challenging and, in many cases, unsatisfactory. As a result, many AI investments have stalled at the prototype level, never reaching clinical settings.
    OBJECTIVE: To improve the chances of future AI implementation projects succeeding, we analyzed the experiences of clinical AI system implementers to better understand the challenges and success factors in their implementations.
    METHODS: Thirty-seven implementers of clinical AI from European and North and South American countries were interviewed. Semi-structured interviews were transcribed and analyzed qualitatively with the framework method, identifying the success factors and the reasons for challenges as well as documenting proposals from implementers to improve AI adoption in clinical settings.
    RESULTS: We gathered the implementers' requirements for facilitating AI adoption in the clinical setting. The main findings include 1) the lesser importance of AI explainability in favor of proper clinical validation studies, 2) the need to actively involve clinical practitioners, and not only clinical researchers, in the inception of AI research projects, 3) the need for better information structures and processes to manage data access and the ethical approval of AI projects, 4) the need for better support for regulatory compliance and avoidance of duplication across data management approval bodies, 5) the need to increase both clinicians' and citizens' literacy regarding the benefits and limitations of AI, and 6) the need for better funding schemes to support the implementation, embedding, and validation of AI in the clinical workflow, beyond pilots.
    CONCLUSIONS: Participants in the interviews are positive about the future of AI in clinical settings. At the same time, they propose numerous measures to transfer research advances into implementations that will benefit healthcare personnel. Transferring AI research into benefits for healthcare workers and patients requires adjustments in regulations, data access procedures, education, funding schemes, and validation of AI systems.

  • Article Type: Journal Article
    Despite the surge in artificial intelligence (AI) development for health care applications, particularly for medical imaging applications, there has been limited adoption of such AI tools into clinical practice. During a 1-day workshop in November 2022, co-organized by the ACR and the RSNA, participants outlined experiences and problems with implementing AI in clinical practice, defined the needs of various stakeholders in the AI ecosystem, and elicited potential solutions and strategies related to the safety, effectiveness, reliability, and transparency of AI algorithms. Participants included radiologists from academic and community radiology practices, informatics leaders responsible for AI implementation, regulatory agency employees, and specialty society representatives. The major themes that emerged fell into two categories: (1) AI product development and (2) implementation of AI-based applications in clinical practice. In particular, participants highlighted key aspects of AI product development to include clear clinical task definitions; well-curated data from diverse geographic, economic, and health care settings; standards and mechanisms to monitor model reliability; and transparency regarding model performance, both in controlled and real-world settings. For implementation, participants emphasized the need for strong institutional governance; systematic evaluation, selection, and validation methods conducted by local teams; seamless integration into the clinical workflow; performance monitoring and support by local teams; performance monitoring by external entities; and alignment of incentives through credentialing and reimbursement. Participants predicted that clinical implementation of AI in radiology will continue to be limited until the safety, effectiveness, reliability, and transparency of such tools are more fully addressed.

  • Article Type: Editorial
    The realm of health care is on the cusp of a significant technological leap, courtesy of the advancements in artificial intelligence (AI) language models, but ensuring the ethical design, deployment, and use of these technologies is imperative to truly realize their potential in improving health care delivery and promoting human well-being and safety. Indeed, these models have demonstrated remarkable prowess in generating humanlike text, evidenced by a growing body of research and real-world applications. This capability paves the way for enhanced patient engagement, clinical decision support, and a plethora of other applications that were once considered beyond reach. However, the journey from potential to real-world application is laden with challenges ranging from ensuring reliability and transparency to navigating a complex regulatory landscape. There is still a need for comprehensive evaluation and rigorous validation to ensure that these models are reliable, transparent, and ethically sound. This editorial introduces the new section, titled "AI Language Models in Health Care." This section seeks to create a platform for academics, practitioners, and innovators to share their insights, research findings, and real-world applications of AI language models in health care. The aim is to foster a community that is not only excited about the possibilities but also critically engaged with the ethical, practical, and regulatory challenges that lie ahead.

  • Article Type: Systematic Review
    To identify factors influencing implementation of machine learning algorithms (MLAs) that predict clinical deterioration in hospitalized adult patients and relate these to a validated implementation framework.
    A systematic review of studies of implemented or trialed real-time clinical deterioration prediction MLAs was undertaken, which identified: how MLA implementation was measured; impact of MLAs on clinical processes and patient outcomes; and barriers, enablers and uncertainties within the implementation process. Review findings were then mapped to the SALIENT end-to-end implementation framework to identify the implementation stages at which these factors applied.
    Thirty-seven articles relating to 14 groups of MLAs were identified, each trialing or implementing a bespoke algorithm. One hundred and seven distinct implementation evaluation metrics were identified. Four groups reported decreased hospital mortality, 1 significantly. We identified 24 barriers, 40 enablers, and 14 uncertainties and mapped these to the 5 stages of the SALIENT implementation framework.
    Algorithm performance decreased between the in silico and trial stages. Silent plus pilot trial inclusion was associated with decreased mortality, as was the use of logistic regression algorithms with fewer than 39 variables. Mitigation of alert fatigue via alert suppression and threshold configuration was commonly employed across groups.
    There is evidence that real-world implementation of clinical deterioration prediction MLAs may improve clinical outcomes. Various factors identified as influencing success or failure of implementation can be mapped to different stages of implementation, thereby providing useful and practical guidance for implementers.
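The alert-fatigue mitigations the review highlights (a risk threshold plus suppression of repeat alerts) can be sketched as follows. The class name, default values, and cooldown mechanism are assumptions for illustration, not taken from any reviewed study:

```python
from datetime import datetime, timedelta

class DeteriorationAlerter:
    """Illustrative sketch of two common alert-fatigue mitigations:
    a configurable risk threshold, and suppression of repeat alerts
    for the same patient within a cooldown window."""

    def __init__(self, threshold=0.8, cooldown=timedelta(hours=6)):
        self.threshold = threshold        # minimum risk score to alert on
        self.cooldown = cooldown          # suppression window per patient
        self._last_alert = {}             # patient_id -> time of last alert

    def should_alert(self, patient_id, risk_score, now):
        if risk_score < self.threshold:
            return False                  # below configured risk threshold
        last = self._last_alert.get(patient_id)
        if last is not None and now - last < self.cooldown:
            return False                  # suppressed: recent alert exists
        self._last_alert[patient_id] = now
        return True
```

In practice the threshold and cooldown would be tuned locally, which is consistent with the review's finding that threshold configuration was a common site-specific decision.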

  • Article Type: Review
    To derive a comprehensive implementation framework for clinical AI models within hospitals informed by existing AI frameworks and integrated with reporting standards for clinical AI research.
    (1) Derive a provisional implementation framework based on the taxonomy of Stead et al and integrated with current reporting standards for AI research: TRIPOD, DECIDE-AI, CONSORT-AI. (2) Undertake a scoping review of published clinical AI implementation frameworks and identify key themes and stages. (3) Perform a gap analysis and refine the framework by incorporating missing items.
    The provisional AI implementation framework, called SALIENT, was mapped to 5 stages common to both the taxonomy and the reporting standards. A scoping review retrieved 20 studies and 247 themes, stages, and subelements were identified. A gap analysis identified 5 new cross-stage themes and 16 new tasks. The final framework comprised 5 stages, 7 elements, and 4 components, including the AI system, data pipeline, human-computer interface, and clinical workflow.
    This pragmatic framework resolves gaps in existing stage- and theme-based clinical AI implementation guidance by comprehensively addressing the what (components), when (stages), and how (tasks) of AI implementation, as well as the who (organization) and why (policy domains). By integrating research reporting standards into SALIENT, the framework is grounded in rigorous evaluation methodologies. The framework requires validation as being applicable to real-world studies of deployed AI models.
    A novel end-to-end framework has been developed for implementing AI within hospital clinical practice that builds on previous AI implementation frameworks and research reporting standards.
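As a rough illustration of how the framework's dimensions might serve as an implementation checklist, the sketch below crosses the 4 named components against 5 numbered stage placeholders. The abstract gives only the stage count; the paper defines the actual stage names, so the labels here are placeholders, not SALIENT's terms:

```python
# The 4 components are named in the abstract; the 5 stage labels below
# are numbered placeholders standing in for the paper's definitions.
COMPONENTS = ("AI system", "data pipeline",
              "human-computer interface", "clinical workflow")
STAGES = tuple(f"stage {i}" for i in range(1, 6))

def missing_components(plan: dict) -> dict:
    """For each stage, list the named components a plan does not address.

    `plan` maps a stage label to the components covered at that stage.
    """
    return {stage: [c for c in COMPONENTS if c not in plan.get(stage, ())]
            for stage in STAGES}
```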

  • Article Type: Systematic Review
    To retrieve and appraise studies of deployed artificial intelligence (AI)-based sepsis prediction algorithms using systematic methods, identify implementation barriers, enablers, and key decisions and then map these to a novel end-to-end clinical AI implementation framework.
    Systematically review studies of clinically applied AI-based sepsis prediction algorithms in regard to methodological quality, deployment and evaluation methods, and outcomes. Identify contextual factors that influence implementation and map these factors to the SALIENT implementation framework.
    The review identified 30 articles of algorithms applied in adult hospital settings, with 5 studies reporting significantly decreased mortality post-implementation. Eight groups of algorithms were identified, each sharing a common algorithm. We identified 14 barriers, 26 enablers, and 22 decision points which were able to be mapped to the 5 stages of the SALIENT implementation framework.
    Empirical studies of deployed sepsis prediction algorithms demonstrate their potential for improving care and reducing mortality but reveal persisting gaps in existing implementation guidance. In the examined publications, key decision points reflecting real-world implementation experience could be mapped to the SALIENT framework and, as these decision points appear to be AI-task agnostic, this framework may also be applicable to non-sepsis algorithms. The mapping clarified where and when barriers, enablers, and key decisions arise within the end-to-end AI implementation process.
    A systematic review of real-world implementation studies of sepsis prediction algorithms was used to validate an end-to-end staged implementation framework that accounts for key factors warranting attention to ensure successful deployment, and that extends previous AI implementation frameworks.
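A mapping exercise like the review's can be sketched as a tagged list: each contextual factor carries its kind and the SALIENT stage (1 to 5) at which it arises. The factor texts and stage assignments below are invented examples for illustration, not findings from the review, which itself identified 14 barriers, 26 enablers, and 22 decision points:

```python
from collections import Counter

# Invented examples of contextual factors tagged (kind, description, stage).
factors = [
    ("barrier",  "no live data feed from the EHR",   2),
    ("enabler",  "clinical champion on the ward",    1),
    ("decision", "choice of alert threshold",        3),
    ("barrier",  "alert fatigue during silent run",  4),
    ("enabler",  "post-deployment audit in place",   5),
]

# Tally where in the end-to-end process each kind of factor arises.
per_stage = Counter(stage for _, _, stage in factors)
per_kind = Counter(kind for kind, _, _ in factors)
```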

  • Article Type: Journal Article
    Artificial intelligence-based algorithms are being widely implemented in health care, even as evidence is emerging of bias in their design, problems with implementation, and potential harm to patients. To achieve the promise of using AI-based tools to improve health, healthcare organizations will need to be AI-capable, with internal and external systems functioning in tandem to ensure the safe, ethical, and effective use of AI-based tools. Ideas are starting to emerge about the organizational routines, competencies, resources, and infrastructures that will be required for safe and effective deployment of AI in health care, but there has been little empirical research. Infrastructures that provide legal and regulatory guidance for managers, clinician competencies for the safe and effective use of AI-based tools, and learner-centric resources such as clear AI documentation and local health ecosystem impact reviews can help drive continuous improvement.