XAI

  • Article type: Journal Article
    Explainable artificial intelligence (XAI) elucidates the decision-making process of complex AI models and is important in building trust in model predictions. XAI explanations themselves require evaluation as to accuracy and reasonableness and in the context of use of the underlying AI model. This review details the evaluation of XAI in cardiac AI applications and has found that, of the studies examined, 37% evaluated XAI quality using literature results, 11% used clinicians as domain-experts, 11% used proxies or statistical analysis, with the remaining 43% not assessing the XAI used at all. We aim to inspire additional studies within healthcare, urging researchers not only to apply XAI methods but to systematically assess the resulting explanations, as a step towards developing trustworthy and safe models.
    The online version contains supplementary material available at 10.1007/s10462-024-10852-w.
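    The abstract above notes that 11% of the reviewed studies judged explanation quality with proxies or statistical analysis, since ground-truth explanations are rarely available. As a purely illustrative sketch of one such proxy, deletion-based faithfulness (our assumption; the review does not prescribe a specific metric), the helper below masks the features an XAI method ranked highest and checks how much the model's prediction drops. All names and defaults are hypothetical.

```python
# Illustrative proxy metric for explanation quality (assumed example, not from the reviewed paper):
# if the features an XAI method ranks as most important are masked, a faithful explanation
# should cause a larger change in the model's output than masking features at random.
import numpy as np

def deletion_faithfulness(model_predict, x, attributions, k, baseline=0.0):
    """Drop in prediction after masking the k features with the largest attributions."""
    x = np.asarray(x, dtype=float)
    top_k = np.argsort(np.abs(attributions))[::-1][:k]   # indices of the top-k features
    x_masked = x.copy()
    x_masked[top_k] = baseline                            # replace them with a reference value
    original = float(model_predict(x.reshape(1, -1))[0])
    masked = float(model_predict(x_masked.reshape(1, -1))[0])
    return original - masked                              # larger drop -> more faithful ranking
```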

  • Article type: Journal Article
    BACKGROUND: Responsible artificial intelligence (RAI) emphasizes the use of ethical frameworks implementing accountability, responsibility, and transparency to address concerns in the deployment and use of artificial intelligence (AI) technologies, including privacy, autonomy, self-determination, bias, and transparency. Standards are under development to guide the support and implementation of AI given these considerations.
    OBJECTIVE: The purpose of this review is to provide an overview of current research evidence and knowledge gaps regarding the implementation of RAI principles and the occurrence and resolution of ethical issues within AI systems.
    METHODS: A scoping review following Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines was proposed. PubMed, ERIC, Scopus, IEEE Xplore, EBSCO, Web of Science, ACM Digital Library, and ProQuest (Arts and Humanities) will be systematically searched for articles published since 2013 that examine RAI principles and ethical concerns within AI. Eligibility assessment will be conducted independently and coded data will be analyzed along themes and stratified across discipline-specific literature.
    RESULTS: The results will be included in the full scoping review, which is expected to start in June 2024 and to be completed, with submission for publication, by the end of 2024.
    CONCLUSIONS: This scoping review will summarize the state of evidence and provide an overview of its impact, as well as strengths, weaknesses, and gaps in research implementing RAI principles. The review may also reveal discipline-specific concerns, priorities, and proposed solutions to the concerns. It will thereby identify priority areas that should be the focus of future regulatory options available, connecting theoretical aspects of ethical requirements for principles with practical solutions.
    PRR1-10.2196/52349.

  • Article type: Systematic Review
    Medical use cases for machine learning (ML) are growing exponentially. The first hospitals are already using ML systems as decision support systems in their daily routine. At the same time, most ML systems are still opaque and it is not clear how these systems arrive at their predictions.
    In this paper, we provide a brief overview of the taxonomy of explainability methods and review popular methods. In addition, we conduct a systematic literature search on PubMed to investigate which explainable artificial intelligence (XAI) methods are used in 450 specific medical supervised ML use cases, how the use of XAI methods has emerged recently, and how the precision of describing ML pipelines has evolved over the past 20 years.
    A large fraction of publications with ML use cases do not use XAI methods at all to explain ML predictions. However, when XAI methods are used, open-source and model-agnostic explanation methods are more commonly used, with SHapley Additive exPlanations (SHAP) and Gradient-weighted Class Activation Mapping (Grad-CAM) leading the way for tabular and image data, respectively. ML pipelines have been described in increasing detail and uniformity in recent years. However, the willingness to share data and code has stagnated at about one-quarter.
    XAI methods are mainly used when their application requires little effort. The homogenization of reports in ML use cases facilitates the comparability of work and should be advanced in the coming years. Experts who can mediate between the worlds of informatics and medicine will become more and more in demand when using ML systems due to the high complexity of the domain.
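    The review above finds that SHAP leads for tabular data largely because it is open source and takes little effort to apply. As a minimal, assumed illustration of that workflow (synthetic data and model choice are ours, not drawn from any reviewed study), the sketch below applies shap's TreeExplainer to a small scikit-learn model.

```python
# Minimal SHAP workflow on tabular data (synthetic data and model are illustrative assumptions).
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                              # 200 samples, 4 features
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)                      # fast, tree-specific SHAP variant
shap_values = explainer.shap_values(X)                     # per-sample, per-feature attributions
print(shap_values.shape)                                   # (200, 4)
# shap.summary_plot(shap_values, X)                        # optional global summary plot
```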

  • Article type: Journal Article
    As a form of clean energy, nuclear energy has unique advantages compared to other energy sources in the present era, where low-carbon policies are being widely advocated. The exponential growth of artificial intelligence (AI) technology in recent decades has resulted in new opportunities and challenges in terms of improving the safety and economics of nuclear reactors. This study briefly introduces modern AI algorithms such as machine learning, deep learning, and evolutionary computing. Furthermore, several studies on the use of AI techniques for nuclear reactor design optimization as well as operation and maintenance (O&M) are reviewed and discussed. The existing obstacles that prevent the further fusion of AI and nuclear reactor technologies so that they can be scaled to real-world problems are classified into two categories: (1) data issues: insufficient experimental data increases the possibility of data distribution drift and data imbalance; (2) black-box dilemma: methods such as deep learning have poor interpretability. Finally, this study proposes two directions for the future fusion of AI and nuclear reactor technologies: (1) better integration of domain knowledge with data-driven approaches to reduce the high demand for data and improve the model performance and robustness; (2) promoting the use of explainable artificial intelligence (XAI) technologies to enhance the transparency and reliability of the model. In addition, causal learning warrants further attention owing to its inherent ability to solve out-of-distribution generalization (OODG) problems.

  • Article type: Journal Article
    Recent advances in artificial intelligence and deep machine learning have created a step change in how to measure human development indicators, in particular asset-based poverty. The combination of satellite imagery and deep machine learning now has the capability to estimate some types of poverty at a level close to what is achieved with traditional household surveys. An increasingly important issue beyond static estimations is whether this technology can contribute to scientific discovery and, consequently, new knowledge in the poverty and welfare domain. A foundation for achieving scientific insights is domain knowledge, which in turn translates into explainability and scientific consistency. We perform an integrative literature review focusing on three core elements relevant in this context-transparency, interpretability, and explainability-and investigate how they relate to the poverty, machine learning, and satellite imagery nexus. Our inclusion criteria for papers are that they cover poverty/wealth prediction, using survey data as the basis for the ground truth poverty/wealth estimates, be applicable to both urban and rural settings, use satellite images as the basis for at least some of the inputs (features), and the method should include deep neural networks. Our review of 32 papers shows that the status of the three core elements of explainable machine learning (transparency, interpretability, and domain knowledge) is varied and does not completely fulfill the requirements set up for scientific insights and discoveries. We argue that explainability is essential to support wider dissemination and acceptance of this research in the development community and that explainability means more than just interpretability.

  • Article type: Journal Article
    Recent advances in artificial intelligence (AI) have led to its widespread industrial adoption, with machine learning systems demonstrating superhuman performance in a significant number of tasks. However, this surge in performance has often been achieved through increased model complexity, turning such systems into "black box" approaches and causing uncertainty regarding the way they operate and, ultimately, the way that they come to decisions. This ambiguity has made it problematic for machine learning systems to be adopted in sensitive yet critical domains, where their value could be immense, such as healthcare. As a result, scientific interest in the field of Explainable Artificial Intelligence (XAI), a field that is concerned with the development of new methods that explain and interpret machine learning models, has been tremendously reignited over recent years. This study focuses on machine learning interpretability methods; more specifically, a literature review and taxonomy of these methods are presented, as well as links to their programming implementations, in the hope that this survey would serve as a reference point for both theorists and practitioners.
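    As one concrete, hedged instance of the kind of model-agnostic interpretability method such a taxonomy catalogues, the sketch below computes permutation feature importance with scikit-learn; the dataset and classifier are assumptions chosen for demonstration and are not taken from the survey itself.

```python
# Permutation feature importance: a model-agnostic interpretability method
# (illustrative sketch; the dataset and model are assumptions, not from the survey).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and record the resulting drop in accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {idx}: {result.importances_mean[idx]:.3f} +/- {result.importances_std[idx]:.3f}")
```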