Explainability

  • Article type: Journal Article
    Building an investment portfolio is a problem that numerous researchers have addressed for many years. The key goal has always been to balance risk and reward by optimally allocating assets such as stocks, bonds, and cash. In general, the portfolio management process is based on three steps: planning, execution, and feedback, each of which has its own objectives and methods. Starting from Markowitz's mean-variance portfolio theory, different frameworks have been widely accepted, and they have considerably renewed how asset allocation is solved. Recent advances in artificial intelligence provide methodological and technological capabilities to solve highly complex problems, and the investment portfolio is no exception. For this reason, the paper reviews the current state-of-the-art approaches by answering the core question of how artificial intelligence is transforming the portfolio management steps. Moreover, as the use of artificial intelligence in finance is challenged by transparency, fairness, and explainability requirements, a case study of post-hoc explanations for asset allocation is demonstrated. Finally, we discuss recent regulatory developments in the European investment business and highlight specific aspects of this business where explainable artificial intelligence could advance the transparency of the investment process.
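    The post-hoc explanation idea in this case study can be made concrete with a small sketch: compute a mean-variance allocation, then probe by perturbation how sensitive each weight is to the expected-return estimates. This is a minimal illustration, not the paper's case study; the asset names, expected returns, and covariances below are hypothetical.

```python
# Minimal sketch (not the paper's case study): a closed-form mean-variance
# allocation plus a perturbation-based post-hoc explanation of the weights.
# Asset names, expected returns, and covariances are hypothetical.
import numpy as np

def mean_variance_weights(mu, sigma, gamma=5.0):
    """Maximize w'mu - (gamma/2) w'Sigma w subject to sum(w) == 1."""
    ones = np.ones_like(mu)
    inv = np.linalg.inv(sigma)
    lam = (ones @ inv @ mu - gamma) / (ones @ inv @ ones)
    return inv @ (mu - lam * ones) / gamma

assets = ["stocks", "bonds", "cash"]
mu = np.array([0.08, 0.04, 0.01])              # expected returns (hypothetical)
sigma = np.array([[0.040, 0.006, 0.000],
                  [0.006, 0.010, 0.000],
                  [0.000, 0.000, 0.0001]])      # covariance matrix (hypothetical)

weights = mean_variance_weights(mu, sigma)
print(dict(zip(assets, np.round(weights, 3))))

# Post-hoc sensitivity: how do the weights move if one expected return is
# bumped by one percentage point? A large shift flags the inputs that drive
# the allocation.
eps = 0.01
for i, name in enumerate(assets):
    bumped = mu.copy()
    bumped[i] += eps
    delta = mean_variance_weights(bumped, sigma) - weights
    print(f"bump mu[{name}] by {eps}: weight change {np.round(delta, 3)}")
```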

  • Article type: Journal Article
    OBJECTIVE: To review eXplainable Artificial Intelligence (XAI) methods available for medical imaging (MI).
    METHODS: A scoping review was conducted following the Joanna Briggs Institute's methodology. The search was performed on PubMed, Embase, CINAHL, Web of Science, BioRxiv, MedRxiv, and Google Scholar. Studies published in French and English after 2017 were included. Keyword combinations and descriptors related to explainability and MI modalities were employed. Two independent reviewers screened titles, abstracts, and full texts, resolving differences through discussion.
    RESULTS: 228 studies met the criteria. XAI publications are increasing, targeting MRI (n = 73), radiography (n = 47), and CT (n = 46). Lung (n = 82) and brain (n = 74) pathologies, Covid-19 (n = 48), Alzheimer's disease (n = 25), and brain tumors (n = 15) are the main pathologies explained. Explanations are presented visually (n = 186), numerically (n = 67), as rules (n = 11), textually (n = 11), and by example (n = 6). Commonly explained tasks include classification (n = 89), prediction (n = 47), diagnosis (n = 39), detection (n = 29), segmentation (n = 13), and image quality improvement (n = 6). The explanations provided were most frequently local (78.1%), 5.7% were global, and 16.2% combined local and global approaches. Post-hoc approaches were predominantly employed. The terminology used varied, with explainable (n = 207), interpretable (n = 187), understandable (n = 112), transparent (n = 61), reliable (n = 31), and intelligible (n = 3) sometimes used interchangeably.
    CONCLUSIONS: The number of XAI publications in medical imaging is increasing, primarily focusing on applying XAI techniques to MRI, CT, and radiography for classifying and predicting lung and brain pathologies. Visual and numerical output formats are predominantly used. Terminology standardisation remains a challenge, as terms like "explainable" and "interpretable" are sometimes used interchangeably. Future XAI development should consider user needs and perspectives.
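    Most of the reviewed explanations are local, post-hoc, and visual. An occlusion sensitivity map is one of the simplest members of that family; the sketch below assumes a generic `predict(images) -> class probabilities` function standing in for a trained imaging model and is not drawn from any of the reviewed studies.

```python
# Minimal sketch of a local, post-hoc, visual explanation: an occlusion
# sensitivity map. `predict` is a hypothetical stand-in for a trained model
# that maps a batch of images to class probabilities.
import numpy as np

def occlusion_map(image, predict, target_class, patch=8, baseline=0.0):
    """Slide a patch over the image and record the drop in the target score."""
    h, w = image.shape
    base_score = predict(image[None])[0, target_class]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            score = predict(occluded[None])[0, target_class]
            heat[i // patch, j // patch] = base_score - score  # big drop = important
    return heat

# Toy demonstration with a fake "model" that scores the centre of the image.
def fake_predict(batch):
    s = batch[:, 12:20, 12:20].mean(axis=(1, 2))
    p1 = 1.0 / (1.0 + np.exp(-10.0 * (s - 0.5)))
    return np.stack([1.0 - p1, p1], axis=1)

img = np.random.rand(32, 32)
print(np.round(occlusion_map(img, fake_predict, target_class=1), 2))
```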

  • Article type: Journal Article
    BACKGROUND: Despite the touted potential of artificial intelligence (AI) and machine learning (ML) to revolutionize health care, clinical decision support tools, herein referred to as medical modeling software (MMS), have yet to realize the anticipated benefits. One proposed obstacle is the set of acknowledged gaps in AI translation. These gaps stem partly from the fragmentation of the processes and resources that support transparent MMS documentation. Consequently, the absence of transparent reporting hinders the provision of evidence to support the implementation of MMS in clinical practice, thereby serving as a substantial barrier to the successful translation of software from research settings to clinical practice.
    OBJECTIVE: This study aimed to scope the current landscape of AI- and ML-based MMS documentation practices and elucidate the function of documentation in facilitating the translation of ethical and explainable MMS into clinical workflows.
    METHODS: A scoping review was conducted in accordance with PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines. PubMed was searched using Medical Subject Headings key concepts of AI, ML, ethical considerations, and explainability to identify publications detailing AI- and ML-based MMS documentation, in addition to snowball sampling of selected reference lists. To include the possibility of implicit documentation practices not explicitly labeled as such, we did not use documentation as a key concept but as an inclusion criterion. A 2-stage screening process (title and abstract screening and full-text review) was conducted by 1 author. A data extraction template was used to record publication-related information; barriers to developing ethical and explainable MMS; available standards, regulations, frameworks, or governance strategies related to documentation; and recommendations for documentation for papers that met the inclusion criteria.
    RESULTS: Of the 115 papers retrieved, 21 (18.3%) papers met the requirements for inclusion. Ethics and explainability were investigated in the context of AI- and ML-based MMS documentation and translation. Data detailing the current state and challenges and recommendations for future studies were synthesized. Notable themes defining the current state and challenges that required thorough review included bias, accountability, governance, and explainability. Recommendations identified in the literature to address present barriers call for a proactive evaluation of MMS, multidisciplinary collaboration, adherence to investigation and validation protocols, transparency and traceability requirements, and guiding standards and frameworks that enhance documentation efforts and support the translation of AI- and ML-based MMS.
    CONCLUSIONS: Resolving barriers to translation is critical for MMS to deliver on expectations, including those barriers identified in this scoping review related to bias, accountability, governance, and explainability. Our findings suggest that transparent strategic documentation, aligning translational science and regulatory science, will support the translation of MMS by coordinating communication and reporting and reducing translational barriers, thereby furthering the adoption of MMS.
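    A model-card-style record is one concrete form that transparent, strategic MMS documentation can take, touching the themes identified above (bias, accountability, governance, explainability). The sketch below is a hypothetical illustration; its field names and example values are assumptions, not a standard proposed by the reviewed papers.

```python
# Hypothetical sketch of a model-card-style documentation record for an
# AI/ML-based medical modeling software (MMS) artifact. Field names and
# values are illustrative, not a published standard.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class MMSModelCard:
    name: str
    version: str
    intended_use: str
    training_data: str
    known_biases: list = field(default_factory=list)            # bias
    accountable_party: str = ""                                  # accountability
    governance_references: list = field(default_factory=list)   # governance
    explanation_methods: list = field(default_factory=list)     # explainability
    validation_summary: str = ""

card = MMSModelCard(
    name="sepsis-risk-model",
    version="0.3.1",
    intended_use="Decision support only; not a substitute for clinical judgement.",
    training_data="Retrospective EHR cohort, single site, 2015-2020 (hypothetical).",
    known_biases=["Under-represents patients under 18"],
    accountable_party="Clinical AI governance committee (hypothetical)",
    governance_references=["Local SOP-042 (hypothetical)"],
    explanation_methods=["Feature attributions reviewed at deployment"],
    validation_summary="External validation pending.",
)

print(json.dumps(asdict(card), indent=2))  # transparent, versionable documentation
```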

  • Article type: Journal Article
    Partial Information Decomposition (PID) is a body of work within information theory that allows one to quantify the information that several random variables provide about another random variable, either individually (unique information), redundantly (shared information), or only jointly (synergistic information). This review article aims to provide a survey of some recent and emerging applications of partial information decomposition in algorithmic fairness and explainability, which are of immense importance given the growing use of machine learning in high-stakes applications. For instance, PID, in conjunction with causality, has enabled the disentanglement of the non-exempt disparity which is the part of the overall disparity that is not due to critical job necessities. Similarly, in federated learning, PID has enabled the quantification of tradeoffs between local and global disparities. We introduce a taxonomy that highlights the role of PID in algorithmic fairness and explainability in three main avenues: (i) Quantifying the legally non-exempt disparity for auditing or training; (ii) Explaining contributions of various features or data points; and (iii) Formalizing tradeoffs among different disparities in federated learning. Lastly, we also review techniques for the estimation of PID measures, as well as discuss some challenges and future directions.
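    The unique/redundant/synergistic split can be made concrete with the original Williams-Beer I_min measure, which is only one of several PID definitions treated in this literature. The sketch below computes it for two canonical toy cases: an XOR target (purely synergistic) and a duplicated source (purely redundant).

```python
# Minimal sketch of the Williams-Beer partial information decomposition
# (I_min) for two discrete sources X1, X2 and a target Y, shown only to
# make the unique/redundant/synergistic split concrete.
import numpy as np

def mutual_info(pxy):
    """I(X;Y) in bits from a joint table p[x, y]."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def specific_info(p_src_given_y, p_y_given_src, p_y):
    """I(Y=y; source) = sum_s p(s|y) log2( p(y|s) / p(y) ), for each y."""
    out = np.zeros(len(p_y))
    for y in range(len(p_y)):
        mask = p_src_given_y[:, y] > 0
        out[y] = np.sum(p_src_given_y[mask, y] *
                        np.log2(p_y_given_src[mask, y] / p_y[y]))
    return out

def pid_imin(p):
    """p[x1, x2, y] -> redundant, unique, and synergistic information (bits)."""
    p1y = p.sum(axis=1)                  # joint of (X1, Y)
    p2y = p.sum(axis=0)                  # joint of (X2, Y)
    py = p.sum(axis=(0, 1))
    spec1 = specific_info(p1y / py, p1y / p1y.sum(axis=1, keepdims=True), py)
    spec2 = specific_info(p2y / py, p2y / p2y.sum(axis=1, keepdims=True), py)
    redundancy = float(np.sum(py * np.minimum(spec1, spec2)))
    i1, i2 = mutual_info(p1y), mutual_info(p2y)
    i12 = mutual_info(p.reshape(-1, p.shape[2]))
    return {"redundant": redundancy,
            "unique_X1": i1 - redundancy,
            "unique_X2": i2 - redundancy,
            "synergistic": i12 - i1 - i2 + redundancy}

# XOR target: all information is synergistic.
xor = np.zeros((2, 2, 2))
for a in (0, 1):
    for b in (0, 1):
        xor[a, b, a ^ b] = 0.25
print("XOR :", {k: round(v, 3) for k, v in pid_imin(xor).items()})

# Duplicated source (X1 = X2 = Y): all information is redundant.
dup = np.zeros((2, 2, 2))
dup[0, 0, 0] = dup[1, 1, 1] = 0.5
print("COPY:", {k: round(v, 3) for k, v in pid_imin(dup).items()})
```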

  • Article type: Journal Article
    Neuro-robots are a class of autonomous machines that, in their architecture, mimic aspects of the human brain and cognition. As such, they represent unique artifacts created by humans based on human understanding of healthy human brains. The European Union's Convention on Roboethics 2025 states that the design of all robots (including neuro-robots) must include provisions for the complete traceability of the robots' actions, analogous to an aircraft's flight data recorder. At the same time, one can anticipate rising instances of neuro-robotic failure, as these robots operate on imperfect data in real environments and the AI underlying them has yet to achieve explainability. This paper reviews the trajectory of the technology used in neuro-robots and the accompanying failures. The failures demand an explanation. While drawing on existing explainable AI research, we argue that the limits of explainability in AI limit the explainability of neuro-robots. In order to make robots more explainable, we suggest potential pathways for future research.
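    The Convention's traceability requirement is, at its core, an engineering task: every perception-decision-action step should leave an auditable record, much like a flight data recorder. The sketch below is a hypothetical illustration of such a recorder; the record fields and the hash-chaining scheme are assumptions, not a specification from the Convention or from the paper.

```python
# Hypothetical sketch of an append-only "flight data recorder" for a robot
# controller: every decision is logged with a hash chained to the previous
# record so tampering is detectable. Field names are assumptions.
import hashlib
import json
import time

class DecisionRecorder:
    def __init__(self):
        self.records = []
        self._prev_hash = "0" * 64

    def log(self, sensor_summary, model_version, action, confidence):
        record = {
            "timestamp": time.time(),
            "sensor_summary": sensor_summary,   # compressed, not raw, sensor data
            "model_version": model_version,
            "action": action,
            "confidence": confidence,
            "prev_hash": self._prev_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._prev_hash = record["hash"]
        self.records.append(record)
        return record

recorder = DecisionRecorder()
recorder.log({"obstacle_distance_m": 0.4}, "nav-net-1.2", "stop", 0.93)
recorder.log({"obstacle_distance_m": 2.1}, "nav-net-1.2", "forward", 0.88)
print(json.dumps(recorder.records[-1], indent=2))
```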

  • Article type: Systematic Review
    Artificial intelligence (AI) is being applied in medicine to improve healthcare and advance health equity. The application of AI-based technologies in radiology is expected to improve diagnostic performance by increasing accuracy and simplifying personalized decision-making. While this technology has the potential to improve health services, many ethical and societal implications need to be carefully considered to avoid harmful consequences for individuals and groups, especially for the most vulnerable populations. Therefore, several questions are raised, including (1) what types of ethical issues are raised by the use of AI in medicine and biomedical research, and (2) how are these issues being tackled in radiology, especially in the case of breast cancer? To answer these questions, a systematic review of the academic literature was conducted. Searches were performed in five electronic databases to identify peer-reviewed articles published since 2017 on the topic of the ethics of AI in radiology. The review results show that the discourse has mainly addressed expectations and challenges associated with medical AI, and in particular bias and black box issues, and that various guiding principles have been suggested to ensure ethical AI. We found that several ethical and societal implications of AI use remain underexplored, and more attention needs to be paid to addressing potential discriminatory effects and injustices. We conclude with a critical reflection on these issues and the identified gaps in the discourse from a philosophical and STS perspective, underlining the need to integrate a social science perspective in AI developments in radiology in the future.

  • Article type: Journal Article
    Artificial Intelligence (AI) has emerged as a useful aid in numerous clinical applications for diagnosis and treatment decisions. Deep neural networks have shown the same or better performance than clinicians in many tasks, owing to the rapid increase in the available data and computational power. In order to conform to the principles of trustworthy AI, it is essential that the AI system be transparent, robust, and fair, and that it ensure accountability. Current deep neural solutions are referred to as black boxes due to a lack of understanding of the specifics of the decision-making process. Therefore, there is a need to ensure the interpretability of deep neural networks before they can be incorporated into the routine clinical workflow. In this narrative review, we used systematic keyword searches and domain expertise to identify nine different types of interpretability methods that have been used for understanding deep learning models in medical image analysis applications, grouped by the type of explanations generated and their technical similarities. Furthermore, we report the progress made towards evaluating the explanations produced by various interpretability methods. Finally, we discuss limitations, provide guidelines for using interpretability methods, and outline future directions concerning the interpretability of deep neural networks for medical imaging analysis.
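    Gradient-based saliency maps are among the most widely used of these interpretability methods. The sketch below shows only the mechanics, using a tiny untrained PyTorch network as a stand-in for a trained medical-imaging model.

```python
# Minimal sketch of gradient (saliency-map) interpretability for an image
# classifier. The tiny untrained CNN stands in for a trained medical-imaging
# model; only the mechanics of the explanation are illustrated.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(8 * 4 * 4, 2),
)
model.eval()

image = torch.rand(1, 1, 64, 64, requires_grad=True)   # fake single-channel scan
logits = model(image)
target = int(logits.argmax(dim=1))

# Backpropagate the target logit to the input pixels; the magnitude of the
# input gradient is the (vanilla) saliency map.
logits[0, target].backward()
saliency = image.grad.abs().squeeze()

print(saliency.shape)            # torch.Size([64, 64])
print(float(saliency.max()))     # strength of the most influential pixel
```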

  • Article type: Journal Article
    The development of medical imaging artificial intelligence (AI) systems for evaluating COVID-19 patients has demonstrated potential for improving clinical decision making and assessing patient outcomes during the recent COVID-19 pandemic. These systems have been applied to many medical imaging tasks, including disease diagnosis and patient prognosis, as well as to augment other clinical measurements to better inform treatment decisions. Because these systems are used in life-or-death decisions, clinical implementation relies on user trust in the AI output. This has led many developers to use explainability techniques in an attempt to help users understand when an AI algorithm is likely to succeed and which cases may be problematic for automatic assessment, thus increasing the potential for rapid clinical translation. The application of AI to COVID-19 has recently been marred by controversy. This review discusses several aspects of explainable and interpretable AI as they pertain to the evaluation of COVID-19 disease and how they can restore trust in AI applications for this disease. These include the identification of common tasks that are relevant to explainable medical imaging AI, an overview of several modern approaches for producing explainable output appropriate for a given imaging scenario, a discussion of how to evaluate explainable AI, and recommendations for best practices in explainable/interpretable AI implementation. This review will allow developers of AI systems for COVID-19 to quickly understand the basics of several explainable AI techniques and will assist in the selection of an approach that is both appropriate and effective for a given scenario.
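    One recurring idea for evaluating explanations is faithfulness: if the pixels an explanation marks as important are removed, the model's confidence in its prediction should fall. The sketch below implements a simple deletion-style check with hypothetical stand-ins for the model and the saliency map; it illustrates the idea rather than any benchmark from the review.

```python
# Illustrative deletion-style faithfulness check for a saliency explanation:
# progressively remove the most "important" pixels and watch the target
# probability fall. `fake_predict` and the saliency map are hypothetical.
import numpy as np

def deletion_curve(image, saliency, predict, target_class, steps=5):
    order = np.argsort(saliency.ravel())[::-1]          # most important first
    img = image.copy().ravel()
    scores = [predict(img.reshape(image.shape)[None])[0, target_class]]
    chunk = len(order) // steps
    for k in range(steps):
        img[order[k * chunk:(k + 1) * chunk]] = 0.0     # "delete" pixels
        scores.append(predict(img.reshape(image.shape)[None])[0, target_class])
    return np.array(scores)   # a faithful explanation gives a steep early drop

# Toy stand-ins: the "model" scores the centre of the image, and the
# saliency map (correctly) points at the centre.
def fake_predict(batch):
    s = batch[:, 24:40, 24:40].mean(axis=(1, 2))
    p1 = 1.0 / (1.0 + np.exp(-10.0 * (s - 0.25)))
    return np.stack([1.0 - p1, p1], axis=1)

image = np.random.rand(64, 64)
saliency = np.zeros((64, 64))
saliency[24:40, 24:40] = 1.0
print(np.round(deletion_curve(image, saliency, fake_predict, target_class=1), 3))
```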

  • Article type: Journal Article
    Artificial Intelligence (AI) has recently altered the landscape of cancer research and medical oncology using traditional Machine Learning (ML) algorithms and cutting-edge Deep Learning (DL) architectures. In this review article, we focus on the ML aspect of AI applications in cancer research and present the most indicative studies with respect to the ML algorithms and data used. The PubMed and dblp databases were considered to obtain the most relevant research works of the last five years. Based on a comparison of the proposed studies and their clinical research outcomes concerning medical ML applications in cancer research, three main clinical scenarios were identified. We give an overview of the well-known DL and Reinforcement Learning (RL) methodologies, as well as their application in clinical practice, and we briefly discuss Systems Biology in cancer research. We also provide a thorough examination of the clinical scenarios with respect to disease diagnosis, patient classification, and cancer prognosis and survival. The most relevant studies identified in the preceding year are presented along with their primary findings. Furthermore, we examine the effective implementation of predictive models and the main points that need to be addressed concerning their robustness, explainability, and transparency. Finally, we summarize the most recent advances in the field of AI/ML applications in cancer research and medical oncology, as well as some of the challenges and open issues that need to be addressed before data-driven models can be implemented in healthcare systems to assist physicians in their daily practice.
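    As a concrete instance of explainability for a clinical predictive model, the sketch below fits a standard classifier on scikit-learn's bundled breast-cancer dataset and uses permutation importance as a model-agnostic explanation. It is a generic illustration, not one of the reviewed studies.

```python
# Generic illustration of explainability for a tabular clinical classifier:
# permutation importance on scikit-learn's bundled breast-cancer dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("test accuracy:", round(model.score(X_test, y_test), 3))

# Model-agnostic explanation: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]:<25} {result.importances_mean[i]:.3f}")
```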

  • Article type: Journal Article
    Recent advances in artificial intelligence (AI) have led to its widespread industrial adoption, with machine learning systems demonstrating superhuman performance in a significant number of tasks. However, this surge in performance has often been achieved through increased model complexity, turning such systems into "black box" approaches and causing uncertainty regarding the way they operate and, ultimately, the way they come to decisions. This ambiguity has made it problematic for machine learning systems to be adopted in sensitive yet critical domains, where their value could be immense, such as healthcare. As a result, scientific interest in the field of Explainable Artificial Intelligence (XAI), a field concerned with the development of new methods that explain and interpret machine learning models, has been tremendously reignited in recent years. This study focuses on machine learning interpretability methods; more specifically, a literature review and taxonomy of these methods are presented, along with links to their programming implementations, in the hope that this survey will serve as a reference point for both theorists and practitioners.
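    One family that appears in such taxonomies is the local surrogate (LIME-style) approach: perturb a single instance, query the black-box model, and fit a distance-weighted linear model whose coefficients act as the local explanation. The sketch below is a simplified, self-contained version of that idea, not the reference implementation linked by the survey.

```python
# Simplified local-surrogate (LIME-style) explanation: perturb one instance,
# query the black-box model, and fit a distance-weighted linear surrogate
# whose coefficients serve as the local explanation.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# A black-box model trained on a synthetic nonlinear task.
X = rng.normal(size=(2000, 4))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=2000)
black_box = GradientBoostingRegressor(random_state=0).fit(X, y)

def local_surrogate(instance, model, n_samples=500, scale=0.5):
    """Fit a weighted linear model around `instance`; coefficients = explanation."""
    Z = instance + scale * rng.normal(size=(n_samples, instance.size))
    preds = model.predict(Z)
    weights = np.exp(-np.sum((Z - instance) ** 2, axis=1) / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_

x0 = np.array([0.0, 1.0, 0.5, -0.5])
print(np.round(local_surrogate(x0, black_box), 3))
# Expect large coefficients for features 0 and 1 (the ones the target uses).
```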