The increasing integration of artificial intelligence (AI) in healthcare presents a host of ethical, legal, social, and political challenges involving various stakeholders. These challenges have prompted numerous studies proposing frameworks and guidelines to tackle them, emphasizing distinct phases of AI development, deployment, and oversight. As a result, the notion of responsible AI has become widespread, incorporating ethical principles such as transparency, fairness, responsibility, and privacy. This paper explores the existing literature on AI use in healthcare to examine how it addresses, defines, and discusses the concept of responsibility. We conducted a scoping review of literature related to AI responsibility in healthcare, searching databases and reference lists between January 2017 and January 2022 for terms related to "responsibility" and "AI in healthcare", and their derivatives. Following screening, 136 articles were included. Data were grouped into four thematic categories: (1) the variety of terminology used to describe and address responsibility; (2) principles and concepts associated with responsibility; (3) stakeholders' responsibilities in AI clinical development, use, and deployment; and (4) recommendations for addressing responsibility concerns. The results show the lack of a clear definition of AI responsibility in healthcare and highlight the importance of ensuring responsible development and implementation of AI in healthcare. Further research is necessary to clarify this notion and to contribute to developing frameworks regarding the types of responsibility (ethical/moral/professional, legal, and causal) of the various stakeholders involved in the AI lifecycle.