Keywords: artificial intelligence; clinical; end-of-life care; ethics; machine learning; palliative care

MeSH: Humans; Workflow; Advance Care Planning; Ethnicity; Qualitative Research

Source: DOI: 10.1093/jamia/ocad022   PDF (PubMed)

Abstract:
Objective: Identifying ethical concerns with ML applications to healthcare (ML-HCA) before problems arise is now a stated goal of ML design oversight groups and regulatory agencies. The lack of an accepted standard methodology for ethical analysis, however, presents challenges. In this case study, we evaluate the use of a stakeholder "values-collision" approach to identify consequential ethical challenges associated with an ML-HCA for advance care planning (ACP). Identification of ethical challenges could guide revision and improvement of the ML-HCA.
Materials and Methods: We conducted semistructured interviews of the designers, clinician-users, affiliated administrators, and patients, followed by inductive qualitative analysis of the transcribed interviews using modified grounded theory.
Results: Seventeen stakeholders were interviewed. Five "values-collisions" (instances in which stakeholders disagreed about decisions with ethical implications) were identified: (1) the end-of-life workflow and how model output is introduced; (2) which stakeholders receive predictions; (3) benefit-harm trade-offs; (4) whether the ML design team has a fiduciary relationship to patients and clinicians; and (5) how, and whether, to protect early deployment research from external pressures, such as news scrutiny, before the research is completed.
Discussion: From these findings, the ML design team prioritized: (1) alternative workflow implementation strategies; (2) clarification that the prediction was evaluated only for ACP need, not for other mortality-related ends; and (3) shielding the research from scrutiny until endpoint-driven studies were completed.
Conclusion: In this case study, our ethical analysis of this ML-HCA for ACP identified multiple sites of intrastakeholder disagreement that mark areas of ethical and value tension. These findings provided a useful initial ethical screening.