Keywords: LIME; Shapley additive explanations; VeReMi dataset; anomaly detection; autonomous driving; explainable AI; feature extraction

Source: DOI: 10.3390/s24113515 (PDF: PubMed)

Abstract:
Recent advances in autonomous driving bring an associated cybersecurity risk: the networks connecting autonomous vehicles (AVs) can be compromised, motivating the use of AI models to detect anomalies on these networks. In this context, explainable AI (XAI) is crucial for explaining the behavior of such anomaly detection models. This work introduces a comprehensive framework for assessing black-box XAI techniques for anomaly detection in AVs. The framework supports both global and local XAI methods, elucidating how XAI techniques explain the decisions of AI models that classify anomalous AV behavior. Using six evaluation metrics (descriptive accuracy, sparsity, stability, efficiency, robustness, and completeness), the framework evaluates two well-known black-box XAI techniques, SHAP and LIME: each technique is first applied to identify the primary features driving anomaly classification, and extensive experiments then assess SHAP and LIME against the six metrics on two prevalent autonomous driving datasets, VeReMi and Sensor. This study advances the deployment of black-box XAI methods for real-world anomaly detection in autonomous driving systems and contributes valuable insights into the strengths and limitations of current black-box XAI methods in this critical domain.
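As a minimal sketch of the workflow the abstract describes (not the authors' implementation), the snippet below applies SHAP and LIME to a stand-in black-box anomaly classifier and extracts global and local feature rankings. The random-forest model, the synthetic data, and the VeReMi-style feature names are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): SHAP and LIME on a stand-in
# black-box anomaly classifier. The data, model, and feature names are
# illustrative assumptions, not the actual VeReMi/Sensor pipelines.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["pos_x", "pos_y", "spd_x", "spd_y"]  # hypothetical VeReMi-style features
X = rng.normal(size=(500, 4))
y = (np.abs(X[:, 2]) > 1.5).astype(int)  # toy label: anomalously high speed

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
f = lambda data: model.predict_proba(data)[:, 1]  # P(anomalous), treated as a black box

# Global view with SHAP: mean |SHAP value| ranks the primary features.
explainer = shap.KernelExplainer(f, shap.sample(X, 50))
shap_values = explainer.shap_values(X[:20])  # (20, 4) attribution matrix
ranking = np.abs(shap_values).mean(axis=0)
print(dict(zip(feature_names, ranking.round(3))))

# Local view with LIME: per-sample feature weights for one prediction.
lime_explainer = LimeTabularExplainer(
    X, feature_names=feature_names,
    class_names=["normal", "anomalous"], discretize_continuous=True)
exp = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(exp.as_list())
```

The six evaluation metrics would then be computed on top of such attributions; for instance, descriptive accuracy is typically measured by removing the top-ranked features and observing the resulting drop in model accuracy.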