Recently, explainability in machine and deep learning has become an important area of research and interest, owing both to the increasing use of artificial intelligence (AI) methods and to the need to understand the decisions made by models. Explainable artificial intelligence (XAI) has grown out of increasing awareness of, among other things, data mining, error elimination, and the learning performance of various AI algorithms. Moreover, XAI makes the decisions made by models more transparent as well as effective. In this study, models from the 'glass box' group, including Decision Tree, and the 'black box' group, including Random Forest, were proposed to understand the identification of selected types of currant
powders. The learning process of these models was carried out to determine performance indicators such as accuracy, precision, recall, and F1-score. Local Interpretable Model-Agnostic Explanations (LIME) were used to visualize and predict the effectiveness of identifying specific types of blackcurrant
powders based on texture descriptors such as entropy, contrast, correlation, dissimilarity, and homogeneity. Bagging (Bagging_100), Decision Tree (DT0), and Random Forest (RF7_gini) proved to be the most effective models within the currant powder interpretability framework. For Bagging_100, the classifier performance measures of accuracy, precision, recall, and F1-score each reached values of approximately 0.979. In comparison, DT0 reached values of 0.968, 0.972, 0.968, and 0.969, and RF7_gini reached values of 0.963, 0.964, 0.963, and 0.963. All of these models achieved classifier performance measures greater than 96%. In the future, XAI using model-agnostic methods can become an additional important tool to help analyze data, including food products, even online.
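The evaluation pipeline described above (training Decision Tree, Random Forest, and Bagging classifiers on five texture descriptors and scoring them with accuracy, precision, recall, and F1) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the dataset is synthetic, the model configurations (`DT0`, `RF7_gini`, `Bagging_100`) are plausible readings of the names in the abstract, and weighted averaging of the per-class metrics is an assumption.

```python
# Hypothetical sketch of the model comparison in the abstract.
# Synthetic data stands in for the blackcurrant powder dataset;
# the five feature names mirror the GLCM-style texture descriptors.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["entropy", "contrast", "correlation",
            "dissimilarity", "homogeneity"]

# Three classes standing in for the selected currant powder types.
X, y = make_classification(n_samples=600, n_features=len(FEATURES),
                           n_informative=4, n_redundant=0, n_classes=3,
                           n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Assumed configurations: RF7_gini read as 7 trees with Gini impurity,
# Bagging_100 as 100 bagged base estimators.
models = {
    "DT0": DecisionTreeClassifier(random_state=0),
    "RF7_gini": RandomForestClassifier(n_estimators=7, criterion="gini",
                                       random_state=0),
    "Bagging_100": BaggingClassifier(n_estimators=100, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(f"{name}: "
          f"acc={accuracy_score(y_test, pred):.3f} "
          f"prec={precision_score(y_test, pred, average='weighted'):.3f} "
          f"rec={recall_score(y_test, pred, average='weighted'):.3f} "
          f"f1={f1_score(y_test, pred, average='weighted'):.3f}")
```

On the real dataset, each fitted model could then be passed to a LIME tabular explainer together with `FEATURES` to attribute individual predictions to specific texture descriptors, which is how the abstract's per-powder explanations are produced.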