Clinical explainability

  • Article type: Journal Article
    BACKGROUND: The different tumor appearance of head and neck cancer across imaging modalities, scanners, and acquisition parameters accounts for the highly subjective nature of the manual tumor segmentation task. The variability of the manual contours is one of the causes of the lack of generalizability and the suboptimal performance of deep learning (DL) based tumor auto-segmentation models. Therefore, a DL-based method was developed that outputs predicted tumor probabilities for each PET-CT voxel in the form of a probability map instead of one fixed contour. The aim of this study was to show that DL-generated probability maps for tumor segmentation are clinically relevant, intuitive, and a more suitable solution to assist radiation oncologists in gross tumor volume segmentation on PET-CT images of head and neck cancer patients.
    METHODS: A graphical user interface (GUI) was designed, and a prototype was developed to allow the user to interact with tumor probability maps. Furthermore, a user study was conducted where nine experts in tumor delineation interacted with the interface prototype and its functionality. The participants' experience was assessed qualitatively and quantitatively.
    RESULTS: The interviews with radiation oncologists revealed their preference for using a rainbow colormap to visualize tumor probability maps during contouring, which they found intuitive. They also appreciated the slider feature, which facilitated interaction by letting them select a threshold value to create a single contour that could be edited or used as a starting point. Feedback on the prototype highlighted its excellent usability and positive integration into clinical workflows.
    CONCLUSIONS: This study shows that DL-generated tumor probability maps are explainable, transparent, and intuitive, and a better alternative to the single fixed output of tumor segmentation models.
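    The slider interaction described above reduces, at its core, to thresholding the voxel-wise probability map into a single editable contour. A minimal Python sketch of that step, with a synthetic volume standing in for the model's PET-CT output (the function name and array shapes are illustrative, not from the paper):

    ```python
    import numpy as np

    def contour_from_probability_map(prob_map: np.ndarray, threshold: float) -> np.ndarray:
        """Binarize a voxel-wise tumor probability map at a user-chosen threshold.

        Mirrors the slider interaction: lowering the threshold grows the
        candidate contour, raising it shrinks it.
        """
        if not 0.0 <= threshold <= 1.0:
            raise ValueError("threshold must lie in [0, 1]")
        return prob_map >= threshold

    # Toy 3-D probability volume standing in for a DL model's output.
    rng = np.random.default_rng(seed=0)
    prob_map = rng.random((64, 64, 32))

    mask = contour_from_probability_map(prob_map, threshold=0.5)
    print(f"Voxels inside the 0.5-threshold contour: {mask.sum()}")
    ```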

  • Article type: Journal Article
    BACKGROUND: Machine learning (ML) models for early identification of patients at risk of hospital-acquired urinary tract infections (HA-UTI) may enable timely and targeted preventive and therapeutic strategies. However, clinicians often find it challenging to interpret the predictive outcomes provided by ML models, which can differ considerably in performance.
    OBJECTIVE: To train ML models for predicting patients at risk of HA-UTI using available data from electronic health records at the time of hospital admission. We focused on the performance of different ML models and clinical explainability.
    METHODS: This retrospective study investigated patient data representing 138,560 hospital admissions in the North Denmark Region from 1 January 2017 to 31 December 2018. We extracted 51 socio-demographic and clinical health features into a full dataset and used the χ² test in addition to expert knowledge for feature selection, resulting in two reduced datasets. Seven different ML models were trained and compared across the three datasets. We applied the SHapley Additive exPlanations (SHAP) method to support population- and patient-level explainability.
    RESULTS: The best-performing ML model was a neural network based on the full dataset, reaching an area under the curve (AUC) of 0.758. The neural network was also the best-performing model on the reduced datasets, reaching an AUC of 0.746. Clinical explainability was demonstrated with a SHAP summary plot and a SHAP force plot.
    CONCLUSIONS: Within 24 h of hospital admission, the ML models were able to identify patients at risk of developing HA-UTI, providing new opportunities to develop efficient strategies for the prevention of HA-UTI. Using SHAP, we demonstrate how risk predictions can be explained at the individual patient level and for the patient population in general.
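    For context, the population-level (summary plot) and patient-level (force plot) SHAP views described above can be produced with the shap library along the following lines. This is a hedged sketch: the gradient-boosted classifier, feature names, and synthetic data are illustrative stand-ins, not the study's neural network or EHR features.

    ```python
    import numpy as np
    import shap
    from sklearn.ensemble import GradientBoostingClassifier

    # Synthetic stand-in for admission-time EHR features (hypothetical names).
    rng = np.random.default_rng(seed=0)
    feature_names = ["age", "catheter_use", "prior_uti", "creatinine", "length_of_stay"]
    X = rng.random((500, len(feature_names)))
    y = (X[:, 1] + 0.5 * X[:, 2] + 0.2 * rng.standard_normal(500) > 0.8).astype(int)

    model = GradientBoostingClassifier().fit(X, y)

    # TreeExplainer computes exact SHAP values for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # Population-level view: which features drive predicted HA-UTI risk overall.
    shap.summary_plot(shap_values, X, feature_names=feature_names)

    # Patient-level view: why a single admission is flagged as at risk.
    shap.force_plot(explainer.expected_value, shap_values[0], X[0],
                    feature_names=feature_names, matplotlib=True)
    ```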

  • Article type: Journal Article
    Medical experts may use Artificial Intelligence (AI) systems with greater trust if these are supported by 'contextual explanations' that let the practitioner connect system inferences to their context of use. However, their importance in improving model usage and understanding has not been extensively studied. Hence, we consider a comorbidity risk prediction scenario and focus on contexts regarding the patients' clinical state, AI predictions about their risk of complications, and algorithmic explanations supporting the predictions. We explore how relevant information for such dimensions can be extracted from medical guidelines to answer typical questions from clinical practitioners. We identify this as a question answering (QA) task and employ several state-of-the-art Large Language Models (LLMs) to present contexts around risk prediction model inferences and evaluate their acceptability. Finally, we study the benefits of contextual explanations by building an end-to-end AI pipeline including data cohorting, AI risk modeling, post-hoc model explanations, and a prototyped visual dashboard that presents the combined insights from different context dimensions and data sources while predicting and identifying the drivers of risk of Chronic Kidney Disease (CKD), a common type 2 diabetes (T2DM) comorbidity. All of these steps were performed in deep engagement with medical experts, including a final evaluation of the dashboard results by an expert medical panel. We show that LLMs, in particular BERT and SciBERT, can be readily deployed to extract some relevant explanations to support clinical usage. To understand the value-add of the contextual explanations, the expert panel evaluated them for actionable insights in the relevant clinical setting. Overall, our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case. Our findings can help improve clinicians' usage of AI models.
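    As a rough illustration of the guideline QA step (not the authors' exact pipeline), a BERT-family model fine-tuned for extractive QA can answer clinician-style questions from guideline text via the Hugging Face transformers pipeline. The checkpoint and the guideline excerpt below are assumptions for demonstration:

    ```python
    from transformers import pipeline

    # A SQuAD-fine-tuned BERT-family checkpoint; the paper evaluated BERT and
    # SciBERT variants, so this particular model is an illustrative choice.
    qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

    # Hypothetical guideline excerpt standing in for real CKD/T2DM guidance.
    context = (
        "In adults with type 2 diabetes, assess the urinary albumin-to-creatinine "
        "ratio and estimated GFR at least annually to screen for chronic kidney "
        "disease. Consider ACE inhibitors or ARBs when albuminuria is present."
    )

    question = "How often should patients with type 2 diabetes be screened for CKD?"
    result = qa(question=question, context=context)
    print(f"Answer: {result['answer']} (score: {result['score']:.2f})")
    ```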