XAI

  • Article type: Journal Article
    Explainable artificial intelligence (XAI) has been increasingly investigated to enhance the transparency of black-box artificial intelligence models, promoting better user understanding and trust. Developing an XAI method that is faithful to models and plausible to users is both a necessity and a challenge. This work examines whether embedding human attention knowledge into saliency-based XAI methods for computer vision models can enhance their plausibility and faithfulness. Two novel XAI methods for object detection models, FullGrad-CAM and FullGrad-CAM++, were first developed to generate object-specific explanations by extending current gradient-based XAI methods for image classification models. Using human attention as an objective plausibility measure, these methods achieve higher explanation plausibility. Interestingly, all current XAI methods, when applied to object detection models, generally produce saliency maps that are less faithful to the model than human attention maps from the same object detection task. Accordingly, human attention-guided XAI (HAG-XAI) was proposed to learn from human attention how best to combine explanatory information from the models, using trainable activation functions and smoothing kernels to maximize the similarity between the XAI saliency map and the human attention map. The proposed XAI methods were evaluated on the widely used BDD-100K, MS-COCO, and ImageNet datasets and compared with typical gradient-based and perturbation-based XAI methods. Results suggest that, for image classification models, HAG-XAI enhanced explanation plausibility and user trust at the expense of faithfulness, while for object detection models it enhanced plausibility, faithfulness, and user trust simultaneously and outperformed existing state-of-the-art XAI methods.
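The core idea behind HAG-XAI — smoothing a model's saliency map and scoring its similarity to a human attention map — can be illustrated with a minimal NumPy sketch. Note the fixed Gaussian kernel and Pearson correlation below are illustrative stand-ins for the paper's trainable smoothing kernels and plausibility metric; all function names are ours, not the authors'.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel (stand-in for a learned kernel)."""
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    k = np.outer(g, g)
    return k / k.sum()

def smooth(saliency, size=5, sigma=1.0):
    """Convolve a saliency map with the Gaussian kernel (edge padding)."""
    kernel = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(saliency, pad, mode="edge")
    h, w = saliency.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * kernel)
    return out

def pearson_cc(a, b):
    """Pearson correlation, a common similarity score between a saliency
    map and a human attention map."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())
```

In HAG-XAI the kernel parameters would be trained to maximize this similarity rather than fixed in advance.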

  • Article type: Journal Article
    Market uncertainty greatly interferes with the decisions and plans of market participants, increasing the risk of decision-making and compromising the interests of decision-makers. Cotton price index (hereinafter referred to as cotton price) volatility is highly noisy, nonlinear, and stochastic and is susceptible to supply and demand, climate, substitutes, and other policy factors, which are subject to large uncertainties. To reduce decision risk and provide decision support for policymakers, this article integrates 13 factors affecting cotton price index volatility based on existing research and further divides them into transaction data and interaction data. A long short-term memory (LSTM) model is constructed, and a comparison experiment is implemented to analyze cotton price index volatility. To make the constructed model explainable, we use explainable artificial intelligence (XAI) techniques to perform statistical analysis of the input features. The experimental results show that the LSTM model can accurately analyze the cotton price index fluctuation trend but cannot accurately predict the actual price of cotton; the transaction data plus interaction data are more sensitive than the transaction data alone in analyzing the cotton price fluctuation trend and have a positive effect on cotton price fluctuation analysis. This study can accurately reflect the fluctuation trend of the cotton market, provide a reference for the state, enterprises, and cotton farmers in decision-making, and reduce the risk caused by frequent fluctuations in cotton prices. The analysis of the model using XAI techniques builds decision-makers' confidence in the model.
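The LSTM cell at the heart of such a model gates information flow through forget, input, and output gates, which is what lets it track both long- and short-range structure in a price series. The following single-time-step sketch in NumPy is illustrative only — the study's actual model would be built with a deep learning framework — and the weight shapes and 13-dimensional input (matching the 13 integrated factors) are our assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step.
    x: (D,) input features; h_prev, c_prev: (H,) previous states.
    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) bias,
    with gates stacked in the order forget, input, output, candidate."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    f = sigmoid(z[:H])        # forget gate: how much old memory to keep
    i = sigmoid(z[H:2*H])     # input gate: how much new info to write
    o = sigmoid(z[2*H:3*H])   # output gate: how much memory to expose
    g = np.tanh(z[3*H:])      # candidate memory content
    c = f * c_prev + i * g    # updated long-term cell state
    h = o * np.tanh(c)        # updated short-term hidden state
    return h, c
```

Running this step over a window of daily feature vectors and regressing the final hidden state onto the next price yields the basic forecasting setup the abstract describes.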

  • Article type: Journal Article
    As a form of clean energy, nuclear energy has unique advantages compared to other energy sources in the present era, where low-carbon policies are being widely advocated. The exponential growth of artificial intelligence (AI) technology in recent decades has resulted in new opportunities and challenges in terms of improving the safety and economics of nuclear reactors. This study briefly introduces modern AI algorithms such as machine learning, deep learning, and evolutionary computing. Furthermore, several studies on the use of AI techniques for nuclear reactor design optimization as well as operation and maintenance (O&M) are reviewed and discussed. The existing obstacles that prevent the further fusion of AI and nuclear reactor technologies so that they can be scaled to real-world problems are classified into two categories: (1) data issues: insufficient experimental data increases the possibility of data distribution drift and data imbalance; (2) black-box dilemma: methods such as deep learning have poor interpretability. Finally, this study proposes two directions for the future fusion of AI and nuclear reactor technologies: (1) better integration of domain knowledge with data-driven approaches to reduce the high demand for data and improve the model performance and robustness; (2) promoting the use of explainable artificial intelligence (XAI) technologies to enhance the transparency and reliability of the model. In addition, causal learning warrants further attention owing to its inherent ability to solve out-of-distribution generalization (OODG) problems.

  • Article type: Journal Article
    With the development of urbanization, artificial intelligence, communication technology, and the Internet of Things, cities have evolved from traditional city structures into a new ecology: the smart city. Combining 5G and big data, the applications of smart cities have been extended to every aspect of residents' lives. With the popularization of communication equipment and sensors and great improvements in data transmission and processing technology, production efficiency in the medical, industrial, and security fields has improved. This chapter introduces current research related to smart cities, including their architecture, technologies, and the equipment involved. It then discusses the challenges and opportunities of explainable artificial intelligence (XAI), the next important development direction of AI, especially in the medical field, where patients and medical personnel have non-negligible needs for the interpretability of AI models. Taking COVID-19 as an example, it then discusses how smart cities play a role during a virus outbreak and introduces the specific applications designed so far. Finally, it discusses the shortcomings of the current situation and the aspects that can be improved in the future.

  • Article type: Journal Article
    The COVID-19 pandemic continues to wreak havoc on the health and well-being of the world's population. Successful screening of infected patients is a critical step in the fight against it, with radiology examination using chest radiography being one of the most important screening methods. For the definitive diagnosis of COVID-19 disease, reverse-transcriptase polymerase chain reaction remains the gold standard. Currently available lab tests may not be able to detect all infected individuals; new screening methods are required. Motivated by this and by the open-source efforts in this research area, we propose a Multi-Input Transfer Learning COVID-Net fuzzy convolutional neural network to detect COVID-19 instances from torso X-rays. Furthermore, we use an explainability method to investigate several COVID-Net convolutional network predictions, not only to gain deeper insights into critical factors associated with COVID-19 instances but also to aid clinicians in improving screening. We show that, using transfer learning and pre-trained models, COVID-19 can be detected with a high degree of accuracy. Using X-ray images, we chose four neural networks to predict the probability of COVID-19. Finally, to achieve better results, we considered various methods to verify the techniques proposed here. As a result, we were able to create a model with an AUC of 1.0 and accuracy, precision, and recall of 0.97. The model was quantized for use in Internet of Things devices and maintained an accuracy of 0.95.
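The final step mentioned above — quantizing the model for Internet of Things devices — typically means mapping float weights to 8-bit integers. A minimal NumPy sketch of symmetric per-tensor post-training quantization is shown below; it is an illustrative stand-in under our own assumptions, not the authors' deployment pipeline.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor post-training quantization of float weights
    to int8: map the largest magnitude to 127 and round the rest."""
    scale = max(float(np.max(np.abs(w))) / 127.0, 1e-12)  # guard against all-zero tensors
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for inference."""
    return q.astype(np.float32) * scale
```

Storing `q` plus one scale per tensor shrinks the weights roughly fourfold versus float32, which is why accuracy usually drops only slightly, as the abstract reports.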

  • 文章类型: Journal Article
    Biological resources are multifarious, encompassing organisms, genetic materials, populations, and other biotic components of ecosystems, and fine-grained data management and processing of these diverse types of resources poses a tremendous challenge for both researchers and practitioners. Before the conceptualization of data lakes, former big data management platforms in the research fields of computational biology and biomedicine could not handle many practical data management tasks well. As an effective complement to those previous systems, data lakes were devised to store voluminous, varied, and diversely structured or unstructured data in their native formats, for the sake of various analyses such as reporting, modeling, data exploration, knowledge discovery, data visualization, advanced analysis, and machine learning. Owing to their intrinsic traits, data lakes are thought to be ideal technologies for processing hybrid biological resources in the form of text, image, audio, video, and structured tabular data. This paper proposes a method for constructing a practical data lake system for processing multimodal biological data using a prototype system named ProtoDLS, especially from the explainability point of view, which is indispensable to the rigor, transparency, persuasiveness, and trustworthiness of applications in the field. ProtoDLS adopts a horizontal pipeline to ensure the intra-component explainability factors from data acquisition to data presentation, and a vertical pipeline to ensure the inner-component explainability factors, including mathematics, algorithm, execution time, memory consumption, network latency, security, and sampling size. This dual mechanism ensures explainability guarantees across the entirety of the data lake system. ProtoDLS demonstrates that a single point of explainability cannot thoroughly expound the cause and effect of the matter from an overall perspective, and that adopting a systematic, dynamic, and multisided way of thinking and a system-oriented analysis method is critical when designing a data processing system for biological resources.