Keywords: artificial intelligence; case-based reasoning; causal inference; clinical decision support; drug safety; machine learning; natural language processing

MeSH: United States; United States Food and Drug Administration; Artificial Intelligence; Humans; Drug-Related Side Effects and Adverse Reactions; Clinical Decision-Making; Product Surveillance, Postmarketing / methods; Adverse Drug Reaction Reporting Systems; Algorithms; Trust

Source: DOI: 10.2196/50274 | PDF (PubMed)

Abstract:
Adverse drug reactions are a common cause of morbidity in health care. The US Food and Drug Administration (FDA) evaluates individual case safety reports of adverse events (AEs) submitted to the FDA Adverse Event Reporting System as part of its surveillance activities. Over the past decade, the FDA has explored the application of artificial intelligence (AI) to evaluate these reports to improve the efficiency and scientific rigor of the process. However, a gap remains between AI algorithm development and deployment. This viewpoint describes the lessons learned from our experience and the research needed to address both general issues in case-based reasoning using AI and specific needs of individual case safety report assessment. Beginning with the recognition that the trustworthiness of an AI algorithm is the main determinant of its acceptance by human experts, we apply the Diffusion of Innovations theory to help explain why certain algorithms for evaluating AEs at the FDA were accepted by safety reviewers and others were not. This analysis reveals that the process by which clinicians decide from case reports whether a drug is likely to have caused an AE is not well defined beyond general principles. This makes the development of high-performing, transparent, and explainable AI algorithms challenging and leads to a lack of trust by safety reviewers. Even accounting for the introduction of large language models, the pharmacovigilance community needs an improved understanding of causal inference and of the cognitive framework for determining the causal relationship between a drug and an AE. We describe specific future research directions that would facilitate implementation of, and trust in, AI for drug safety applications, including improved methods for measuring and controlling algorithmic uncertainty, computational reproducibility, and a clearly articulated cognitive framework for causal inference in case-based reasoning.
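One of the research directions named above, measuring and controlling algorithmic uncertainty, can be illustrated with a standard entropy decomposition over an ensemble of classifiers. The sketch below is not from the article: the ensemble, the synthetic scores, and the 0.15-bit review threshold are all hypothetical, chosen only to show how total predictive uncertainty splits into data noise (aleatoric) and model disagreement (epistemic), the latter being a common trigger for routing a case report to a human safety reviewer rather than auto-triaging it.

```python
import numpy as np

# Hypothetical setup: an ensemble of K models has each scored N adverse
# event reports with a probability that the drug caused the AE. All
# numbers here are synthetic, for illustration only.
rng = np.random.default_rng(0)
K, N = 5, 8
ensemble_probs = rng.uniform(0.05, 0.95, size=(K, N))  # shape: (models, reports)

mean_p = ensemble_probs.mean(axis=0)  # consensus causal probability per report

def binary_entropy(p):
    """Entropy of a Bernoulli(p) prediction, in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

total = binary_entropy(mean_p)                           # predictive entropy
aleatoric = binary_entropy(ensemble_probs).mean(axis=0)  # expected per-model entropy
epistemic = total - aleatoric                            # model disagreement (BALD score)

# Route high-disagreement reports to a human reviewer; the threshold
# is arbitrary and would need calibration in any real workflow.
needs_review = epistemic > 0.15
for i in range(N):
    print(f"report {i}: p={mean_p[i]:.2f}  epistemic={epistemic[i]:.3f}  "
          f"{'-> human review' if needs_review[i] else 'auto-triage'}")
```

The design choice this sketch encodes is that reports the models agree on (low epistemic uncertainty) can be triaged automatically even if the causal probability is ambiguous, while reports the models disagree on are exactly the ones where reviewer trust, and reviewer time, is best spent.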