BACKGROUND: Adverse events refer to incidents with potential or actual harm to patients in hospitals. These events are typically documented through patient safety event (PSE) reports, which consist of detailed narratives providing contextual information on the occurrences. Accurate classification of PSE reports is crucial for patient safety monitoring. However, this process faces challenges due to inconsistencies in classification and the sheer volume of reports. Recent advancements in text representation, particularly contextual text representations derived from transformer-based language models, offer a promising solution for more precise PSE report classification. Integrating machine learning (ML) classifiers into this process necessitates a balance between human expertise and artificial intelligence (AI). Central to this integration is the concept of explainability, which is crucial for building trust and ensuring effective human-AI collaboration.
OBJECTIVE: This study aims to investigate the efficacy of ML classifiers trained using contextual text representation in automatically classifying PSE reports. Furthermore, the study presents an interface that integrates the ML classifier with the explainability technique to facilitate human-AI collaboration for PSE report classification.
METHODS: This study used a data set of 861 PSE reports from the maternity units of a large academic hospital in the Southeastern United States. Various ML classifiers were trained with both static and contextual text representations of PSE reports. The trained ML classifiers were evaluated with multiclass classification metrics and the confusion matrix. The local interpretable model-agnostic explanations (LIME) technique was used to provide the rationale for the ML classifier's predictions. An interface that integrates the ML classifier with the LIME technique was designed for incident reporting systems.
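The multiclass evaluation described above can be sketched in miniature. The event-type labels and counts below are hypothetical illustrations, not the study's actual taxonomy or data, and the study would have used standard library implementations rather than this hand-rolled version:

```python
# Sketch of multiclass evaluation with a confusion matrix.
# Labels and data here are hypothetical, for illustration only.

LABELS = ["medication", "fall", "equipment"]

def confusion_matrix(y_true, y_pred, labels):
    """Rows index the true label, columns the predicted label."""
    index = {label: i for i, label in enumerate(labels)}
    matrix = [[0] * len(labels) for _ in labels]
    for true, pred in zip(y_true, y_pred):
        matrix[index[true]][index[pred]] += 1
    return matrix

def accuracy(y_true, y_pred):
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Toy example: 5 reports, 4 classified correctly.
y_true = ["medication", "fall", "fall", "equipment", "medication"]
y_pred = ["medication", "fall", "medication", "equipment", "medication"]

print(confusion_matrix(y_true, y_pred, LABELS))
# → [[2, 0, 0], [1, 1, 0], [0, 0, 1]]
print(accuracy(y_true, y_pred))  # → 0.8
```

Off-diagonal cells of the matrix expose which event types the classifier confuses with one another, which is the kind of error a plain accuracy figure hides.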
RESULTS: The top-performing classifier using contextual text representation achieved an accuracy of 75.4% (95/126), compared with an accuracy of 66.7% (84/126) for the top-performing classifier trained using static text representation. A PSE reporting interface was designed to facilitate human-AI collaboration in PSE report classification. In this design, the ML classifier recommends the top 2 most probable event types, along with explanations for the prediction, enabling PSE reporters and patient safety analysts to choose the most suitable one. The LIME technique showed that the classifier occasionally relies on arbitrary words for classification, emphasizing the necessity of human oversight.
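The top-2 recommendation step in the interface design can be sketched as follows. The event-type labels and probability values are hypothetical, standing in for whatever probability vector the trained classifier would emit for one report narrative:

```python
# Sketch of the interface's top-2 recommendation step.
# Labels and probabilities are hypothetical placeholders.

def top_k_recommendations(probabilities, labels, k=2):
    """Return the k most probable event types, highest first."""
    ranked = sorted(zip(labels, probabilities),
                    key=lambda pair: pair[1], reverse=True)
    return [label for label, _ in ranked[:k]]

# Hypothetical classifier output for a single PSE report.
labels = ["medication", "fall", "equipment", "lab"]
probs = [0.12, 0.55, 0.08, 0.25]

print(top_k_recommendations(probs, labels))  # → ['fall', 'lab']
```

Presenting two candidates rather than a single hard prediction keeps the reporter or analyst in the loop: the human picks the final event type, which matters precisely because LIME showed the classifier sometimes keys on arbitrary words.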
CONCLUSIONS: This study demonstrates that training ML classifiers with contextual text representations can significantly enhance the accuracy of PSE report classification. The interface designed in this study lays the foundation for human-AI collaboration in the classification of PSE reports. The insights gained from this research enhance the decision-making process in PSE report classification, enabling hospitals to more efficiently identify potential risks and hazards and enabling patient safety analysts to take timely actions to prevent patient harm.