Keywords: biomedical NLP; clinical text; natural language processing; representation learning; transformer models

Source: DOI:10.2196/23099   PDF (PubMed)

Abstract:
BACKGROUND: Semantic textual similarity (STS) is a natural language processing (NLP) task that involves assigning a similarity score to 2 snippets of text based on their meaning. This task is particularly difficult in the domain of clinical text, which often features specialized language and the frequent use of abbreviations.
OBJECTIVE: We created an NLP system to predict similarity scores for sentence pairs as part of the Clinical Semantic Textual Similarity track in the 2019 n2c2/OHNLP Shared Task on Challenges in Natural Language Processing for Clinical Data. We subsequently sought to analyze the intermediary token vectors extracted from our models while processing a pair of clinical sentences to identify where and how representations of semantic similarity are built in transformer models.
METHODS: Given a clinical sentence pair, we take the average predicted similarity score across several independently fine-tuned transformers. In our model analysis, we investigated the relationship between the final model's loss and surface features of the sentence pairs and assessed the decodability and representational similarity of the token vectors generated by each model.
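The ensembling step described above can be sketched as follows (a minimal illustration with hypothetical model objects and a `predict` method; not the authors' actual code):

```python
def ensemble_similarity(sentence_a, sentence_b, models):
    """Average the similarity scores predicted by several
    independently fine-tuned transformer models for one
    clinical sentence pair."""
    scores = [model.predict(sentence_a, sentence_b) for model in models]
    return sum(scores) / len(scores)
```

Averaging over independently fine-tuned models reduces the variance of any single fine-tuning run, which is a common motivation for this kind of ensemble.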
RESULTS: Our model achieved a correlation of 0.87 with the ground-truth similarity score, reaching 6th place out of 33 teams (with a first-place score of 0.90). In detailed qualitative and quantitative analyses of the model's loss, we identified the system's failure to correctly model semantic similarity when both sentences in a pair contain details of medical prescriptions, as well as its general tendency to overpredict semantic similarity given significant token overlap. The token vector analysis revealed divergent representational strategies for predicting textual similarity between bidirectional encoder representations from transformers (BERT)-style models and XLNet. We also found that a large amount of information relevant to predicting STS can be captured using a combination of a classification token and the cosine distance between sentence-pair representations in the first layer of a transformer model that did not produce the best predictions on the test set.
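The cosine-distance signal mentioned above can be computed between any two sentence representations; a minimal sketch (the pooling strategy and layer choice used by the authors are not specified here and would be assumptions):

```python
import math

def cosine_distance(u, v):
    """Cosine distance (1 - cosine similarity) between two
    equal-length sentence representation vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(x * x for x in v))
    return 1.0 - dot / (norm_u * norm_v)
```

A distance near 0 indicates the two representations point in the same direction; a distance near 1 indicates they are orthogonal.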
CONCLUSIONS: We designed and trained a system that uses state-of-the-art NLP models to achieve very competitive results on a new clinical STS data set. As our approach uses no hand-crafted rules, it serves as a strong deep learning baseline for this task. Our key contribution is a detailed analysis of the model's outputs and an investigation of the heuristic biases learned by transformer models. We suggest future improvements based on these findings. In our representational analysis we explore how different transformer models converge or diverge in their representation of semantic signals as the tokens of the sentences are augmented by successive layers. This analysis sheds light on how these "black box" models integrate semantic similarity information in intermediate layers, and points to new research directions in model distillation and sentence embedding extraction for applications in clinical NLP.