BERT

  • Article Type: Journal Article
    OBJECTIVE: Patient violence in emergency departments (EDs) may be prevented with proactive mitigation measures targeting potentially violent patients. We aimed to evaluate the effects of two interventions guided by a validated risk-assessment tool.
    METHODS: A prospective interventional study was conducted among patients ≥10 years who visited two EDs in Michigan, USA, from October 2022 to August 2023. During triage, the ED nurses completed the Aggressive Behavior Risk Assessment Tool for EDs (ABRAT-ED) to identify high-risk patients. Following the baseline observational period, interventions were implemented stepwise for the high-risk patients: phase 1 period with signage posting and phase 2 period with a proactive Behavioral Emergency Response Team (BERT) huddle added to the signage posting. Before ED disposition, any violent events and their severities were documented. The data were retrieved retrospectively after the study was completed.
    RESULTS: Of 77,424 evaluable patients, 546 had ≥1 violent event. The violent event rates were 0.93%, 0.68%, and 0.62% for baseline, phase 1, and phase 2, respectively. The relative risk of violent events for phase 1 compared to the baseline was 0.73 (95% confidence interval [CI]: 0.59‒0.90; p = 0.003). The relative risk for phase 2 compared to phase 1 was 0.92 (95% CI: 0.76‒1.12; p = 0.418).
    CONCLUSIONS: The use of signage posting as a persistent visual cue for high-risk patients identified by ABRAT-ED appears to be effective in reducing the overall violent event rates. However, adding proactive BERT huddle to signage posting showed no significant reduction in the violent event rates compared to signage posting alone.
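    As a quick arithmetic check on the reported effect size, the sketch below recomputes the phase 1 relative risk. Per-phase denominators are not reported in this abstract, so the counts are hypothetical, scaled only to reproduce the reported baseline (0.93%) and phase 1 (0.68%) event rates.

```python
from math import exp, log, sqrt

def relative_risk(events_1, n_1, events_0, n_0):
    """Relative risk of phase 1 vs. baseline with a 95% CI
    (standard large-sample formula on the log scale)."""
    rr = (events_1 / n_1) / (events_0 / n_0)
    se = sqrt(1 / events_1 - 1 / n_1 + 1 / events_0 - 1 / n_0)
    lo, hi = exp(log(rr) - 1.96 * se), exp(log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical counts (not reported above), chosen to match the published rates.
rr, lo, hi = relative_risk(events_1=170, n_1=25000, events_0=233, n_0=25000)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}-{hi:.2f})")  # RR = 0.73, close to the reported CI
```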

  • Article Type: Journal Article
    BACKGROUND: Biomedical Relation Extraction (RE) is essential for uncovering complex relationships between biomedical entities within text. However, training RE classifiers is challenging in low-resource biomedical applications with few labeled examples.
    METHODS: We explore the potential of Shortest Dependency Paths (SDPs) to aid biomedical RE, especially in situations with limited labeled examples. In this study, we suggest various approaches to employ SDPs when creating word and sentence representations under supervised, semi-supervised, and in-context-learning settings.
    RESULTS: Through experiments on three benchmark biomedical text datasets, we find that incorporating SDP-based representations enhances the performance of RE classifiers. The improvement is especially notable when working with small amounts of labeled data.
    CONCLUSIONS: SDPs offer valuable insights into the complex sentence structure found in many biomedical text passages. Our study introduces several straightforward techniques that, as demonstrated experimentally, effectively enhance the accuracy of RE classifiers.
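    For readers unfamiliar with SDPs, the sketch below shows one common way to extract a shortest dependency path between two entity mentions, using spaCy's general-purpose English parser and networkx (a biomedical parser would be the natural substitute in this setting); the example sentence and entity pair are illustrative. The appeal for low-resource RE is that the SDP prunes away tokens irrelevant to the relation.

```python
import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")

def shortest_dependency_path(sentence, entity1, entity2):
    """Return the token sequence on the SDP between two single-token entities."""
    doc = nlp(sentence)
    # Treat the dependency tree as an undirected graph: one edge per head-child pair.
    graph = nx.Graph((tok.head.i, tok.i) for tok in doc if tok.head.i != tok.i)
    index = {tok.text: tok.i for tok in doc}  # assumes unique surface forms
    path = nx.shortest_path(graph, index[entity1], index[entity2])
    return [doc[i].text for i in path]

print(shortest_dependency_path(
    "Aspirin inhibits the expression of COX2.", "Aspirin", "COX2"))
# Typically: ['Aspirin', 'inhibits', 'expression', 'of', 'COX2']
```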

  • Article Type: Journal Article
    Metastatic breast cancer (MBC) continues to be a leading cause of cancer-related deaths among women. This work introduces an innovative non-invasive breast cancer classification model designed to improve the identification of cancer metastases. While this study marks an initial exploration into predicting MBC, additional investigations are essential to validate the occurrence of MBC. Our approach combines the strengths of large language models (LLMs), specifically the bidirectional encoder representations from transformers (BERT) model, with the powerful capabilities of graph neural networks (GNNs) to predict MBC patients based on their histopathology reports. This paper introduces a BERT-GNN approach for metastatic breast cancer prediction (BG-MBC) that integrates graph information derived from the BERT model. In this model, nodes are constructed from patient medical records, while BERT embeddings are employed to vectorise the words in histopathology reports, capturing semantic information crucial for classification. Three distinct approaches (univariate selection, an extra-trees classifier for feature importance, and Shapley values) are employed to identify the features with the most significant impact. By selecting the 30 most influential of the 676 features generated as embeddings during model training, the model further enhances its predictive capability. The BG-MBC model achieves outstanding accuracy, with a detection rate of 0.98 and an area under the curve (AUC) of 0.98, in identifying MBC patients. This remarkable performance is credited to the model's use of attention scores generated by the LLM from histopathology reports, effectively capturing pertinent features for classification.
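    A minimal sketch of the BERT-to-GNN hand-off described above, assuming the Hugging Face transformers and PyTorch Geometric libraries: BERT vectorises each report, and a small graph network classifies patient nodes. Graph construction, the 30-of-676 feature selection, and the attention-score analysis are simplified away; sizes and names are illustrative.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer
from torch_geometric.nn import GCNConv

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def report_embedding(text):
    """[CLS] vector of one histopathology report as a fixed-size node feature."""
    inputs = tokenizer(text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return bert(**inputs).last_hidden_state[:, 0]  # shape (1, 768)

class PatientGCN(torch.nn.Module):
    """Two-layer GCN over a patient graph; node features come from BERT."""
    def __init__(self, in_dim=768, hidden=64, num_classes=2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)  # per-patient MBC logits
```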

  • Article Type: Journal Article
    Amid the wave of globalization, cultural amalgamation has become increasingly frequent, bringing the challenges inherent in cross-cultural communication to the fore. To address these challenges, contemporary research has shifted its focus to human-computer dialogue. Especially in the educational paradigm of human-computer dialogue, emotion recognition in user dialogues is particularly important: accurately identifying and understanding users' emotional tendencies is critical to the efficiency and experience of human-computer interaction. This study aims to improve the capability of language emotion recognition in human-computer dialogue. It proposes a hybrid model (BCBA) based on bidirectional encoder representations from transformers (BERT), convolutional neural networks (CNN), bidirectional gated recurrent units (BiGRU), and the attention mechanism. The model leverages BERT to extract semantic and syntactic features from the text. Simultaneously, it integrates CNN and BiGRU networks to delve deeper into textual features, enhancing the model's proficiency in nuanced sentiment recognition. Furthermore, by introducing the attention mechanism, the model can assign different weights to words based on their emotional tendencies, enabling it to prioritize words with discernible emotional inclinations for more precise sentiment analysis. The BCBA model has achieved remarkable results in emotion recognition and classification tasks through experimental validation on two datasets, significantly improving both accuracy and F1 scores, with an average accuracy of 0.84 and an average F1 score of 0.8. Confusion-matrix analysis reveals a minimal classification error rate for this model, and as the number of iterations increases, the model's recall rate stabilizes at approximately 0.7. These results demonstrate the model's robust capabilities in semantic understanding and sentiment analysis and showcase its advantages in handling emotional characteristics in language expressions within a cross-cultural context. The BCBA model proposed in this study thus provides effective technical support for emotion recognition in human-computer dialogue, which is of great significance for building more intelligent and user-friendly human-computer interaction systems. In future work, we will continue to optimize the model's structure, improve its handling of complex emotions and cross-lingual emotion recognition, and explore applying the model to more practical scenarios to further promote the development and application of human-computer dialogue technology.
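    A minimal sketch of the BCBA pipeline as described above (BERT features, a convolution, a BiGRU, and token-level attention pooling), assuming PyTorch and Hugging Face transformers; the checkpoint and layer sizes are illustrative, not the paper's.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class BCBA(nn.Module):
    def __init__(self, num_classes, hidden=128):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-chinese")
        self.conv = nn.Conv1d(768, hidden, kernel_size=3, padding=1)
        self.bigru = nn.GRU(hidden, hidden, bidirectional=True, batch_first=True)
        self.att = nn.Linear(2 * hidden, 1)       # one attention score per token
        self.out = nn.Linear(2 * hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        h = torch.relu(self.conv(h.transpose(1, 2))).transpose(1, 2)  # (B, T, hidden)
        h, _ = self.bigru(h)                                          # (B, T, 2*hidden)
        weights = torch.softmax(self.att(h).squeeze(-1), dim=1)       # token weights
        pooled = (weights.unsqueeze(-1) * h).sum(dim=1)               # attention pooling
        return self.out(pooled)                                       # class logits
```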

  • Article Type: Journal Article
    OBJECTIVE: The surge in patient portal messages (PPMs) with increasing needs and workloads for efficient PPM triage in healthcare settings has spurred the exploration of AI-driven solutions to streamline the healthcare workflow processes, ensuring timely responses to patients to satisfy their healthcare needs. However, there has been less focus on isolating and understanding patient primary concerns in PPMs-a practice which holds the potential to yield more nuanced insights and enhances the quality of healthcare delivery and patient-centered care.
    METHODS: We propose a fusion framework to leverage pretrained language models (LMs) with different language advantages via a Convolutional Neural Network for precise identification of patient primary concerns through multi-class classification. We examined 3 traditional machine learning models, 9 BERT-based language models, 6 fusion models, and 2 ensemble models.
    RESULTS: The outcomes of our experimentation underscore the superior performance achieved by BERT-based models in comparison to traditional machine learning models. Remarkably, our fusion model emerges as the top-performing solution, delivering a notably improved accuracy score of 77.67 ± 2.74% and an F1 score of 74.37 ± 3.70% in macro-average.
    CONCLUSIONS: This study highlights the feasibility and effectiveness of multi-class classification for patient primary concern detection and the proposed fusion framework for enhancing primary concern detection.
    CONCLUSIONS: The use of multi-class classification enhanced by a fusion of multiple pretrained LMs not only improves the accuracy and efficiency of patient primary concern identification in PPMs but also aids in managing the rising volume of PPMs in healthcare, ensuring critical patient communications are addressed promptly and accurately.
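    A minimal sketch of one plausible reading of the fusion framework (each pretrained LM contributes a [CLS] vector, and a convolution mixes them before classification), assuming PyTorch and Hugging Face transformers; the checkpoints, dimensions, and ten-class head are illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class FusionClassifier(nn.Module):
    def __init__(self, checkpoints=("bert-base-uncased",
                                    "emilyalsentzer/Bio_ClinicalBERT"),
                 num_classes=10):
        super().__init__()
        self.encoders = nn.ModuleList(AutoModel.from_pretrained(c) for c in checkpoints)
        # Treat each LM's [CLS] vector as one channel and convolve across channels.
        self.conv = nn.Conv1d(len(checkpoints), 16, kernel_size=5, padding=2)
        self.out = nn.Linear(16 * 768, num_classes)

    def forward(self, batches):
        # `batches`: one tokenised batch per encoder (their tokenisers differ).
        cls = [enc(**b).last_hidden_state[:, 0] for enc, b in zip(self.encoders, batches)]
        x = torch.stack(cls, dim=1)           # (B, num_lms, 768)
        x = torch.relu(self.conv(x))          # (B, 16, 768)
        return self.out(x.flatten(1))         # primary-concern logits
```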

  • Article Type: Journal Article
    Coreference resolution is a key task in Natural Language Processing. It is difficult to evaluate the similarity of long-span texts, which makes text-level encoding somewhat challenging. This paper first compares how commonly used methods for improving a model's ability to gather global information affect BERT encoding performance. Based on this, a multi-scale context information module is designed to improve the applicability of the BERT encoding model across different text spans. In addition, linear separability is improved through dimension expansion. Finally, cross-entropy loss is used as the loss function. After adding the module designed in this article to BERT and SpanBERT, F1 increased by 0.5% and 0.2%, respectively.
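    A minimal sketch of what a multi-scale context information module could look like under the description above: parallel convolutions with different receptive fields over BERT token encodings, concatenated so that spans of different widths are covered. The kernel sizes and dimensions are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultiScaleContext(nn.Module):
    def __init__(self, dim=768, scales=(1, 3, 5, 7)):
        super().__init__()
        # One branch per scale; the branch output widths sum back to `dim`.
        self.branches = nn.ModuleList(
            nn.Conv1d(dim, dim // len(scales), kernel_size=k, padding=k // 2)
            for k in scales)

    def forward(self, h):                 # h: (B, T, dim) BERT token encodings
        x = h.transpose(1, 2)             # (B, dim, T) for Conv1d
        x = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        return x.transpose(1, 2)          # (B, T, dim) multi-scale features
```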

  • Article Type: Journal Article
    Many state-of-the-art results in natural language processing (NLP) rely on large pre-trained language models (PLMs). These models consist of large numbers of parameters that are tuned using vast amounts of training data. These factors cause the models to memorize parts of their training data, making them vulnerable to various privacy attacks. This is cause for concern, especially when these models are applied in the clinical domain, where data are very sensitive. Training-data pseudonymization is a privacy-preserving technique that aims to mitigate these problems. This technique automatically identifies and replaces sensitive entities with realistic but non-sensitive surrogates. Pseudonymization has yielded promising results in previous studies. However, no previous study has applied pseudonymization to both the pre-training data of PLMs and the fine-tuning data used to solve clinical NLP tasks. This study evaluates the effects of end-to-end pseudonymization on the predictive performance of Swedish clinical BERT models fine-tuned for five clinical NLP tasks. A large number of statistical tests are performed, revealing minimal harm to performance when using pseudonymized fine-tuning data, and no deterioration from end-to-end pseudonymization of pre-training and fine-tuning data. These results demonstrate that pseudonymizing training data to reduce privacy risks can be done without harming data utility for training PLMs.
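    A minimal sketch of the pseudonymization step itself, assuming a Hugging Face NER pipeline; the checkpoint and the surrogate list are illustrative (the study targets Swedish clinical text, not general English).

```python
from transformers import pipeline

ner = pipeline("token-classification", model="dslim/bert-base-NER",
               aggregation_strategy="simple")

# Realistic but non-sensitive surrogates, one per entity type of interest.
SURROGATES = {"PER": "Alex Smith", "LOC": "Springfield", "ORG": "General Hospital"}

def pseudonymize(text):
    """Replace detected sensitive entities right-to-left so offsets stay valid."""
    for ent in sorted(ner(text), key=lambda e: e["start"], reverse=True):
        surrogate = SURROGATES.get(ent["entity_group"])
        if surrogate:
            text = text[:ent["start"]] + surrogate + text[ent["end"]:]
    return text

print(pseudonymize("John Doe was admitted to Karolinska in Stockholm."))
```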

  • Article Type: Journal Article
    Early cancer detection and treatment depend on the discovery of specific genes that cause cancer. The classification of genetic mutations was initially done manually. However, this process relies on pathologists and can be a time-consuming task. Therefore, to improve the precision of clinical interpretation, researchers have developed computational algorithms that leverage next-generation sequencing technologies for automated mutation analysis. This paper utilized four deep learning classification models with training collections of biomedical texts. These models include bidirectional encoder representations from transformers for biomedical text mining (BioBERT), a specialized language model implemented for biological contexts; impressive results in multiple tasks, including text classification, language inference, and question answering, can be obtained by simply adding an extra layer to the BioBERT model. Moreover, bidirectional encoder representations from transformers (BERT), long short-term memory (LSTM), and bidirectional LSTM (BiLSTM) models have been leveraged to produce very good results in categorizing genetic mutations based on textual evidence. The dataset used in this work was created by the Memorial Sloan Kettering Cancer Center (MSKCC) and contains several mutations; it also poses a major classification challenge in the Kaggle research prediction competitions. In carrying out the work, three challenges were identified: enormous text length, biased representation of the data, and repeated data instances. Based on commonly used evaluation metrics, the experimental results show that the BioBERT model outperforms the other models with an F1 score of 0.87 and an MCC of 0.850, an improvement over similar results in the literature, where an F1 score of 0.70 was achieved with the BERT model.
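    A minimal sketch of the "extra layer on BioBERT" setup described above, using the standard Hugging Face sequence-classification head; the checkpoint name is the commonly used public BioBERT release, and the nine labels follow the MSKCC Kaggle task.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-v1.1")
model = AutoModelForSequenceClassification.from_pretrained(
    "dmis-lab/biobert-v1.1", num_labels=9)  # nine mutation classes in the MSKCC task

text = "Truncating mutations in BRCA1 are known to abolish its function."
inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
logits = model(**inputs).logits  # shape (1, 9): one score per mutation class
```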

  • Article Type: Journal Article
    Among the various molecular properties and their combinations, it is a costly process to obtain desired molecular properties through theory or experiment. Using machine learning to analyze molecular structure features and predict molecular properties is a potentially efficient alternative for accelerating molecular property prediction. In this study, we analyze molecular properties through molecular structure from the perspective of machine learning. We use SMILES sequences as inputs to an artificial neural network to extract molecular structural features and predict molecular properties. A SMILES sequence comprises symbols representing molecular structures. To address the problem that a SMILES sequence differs from actual molecular structural data, we propose a pretraining model for SMILES sequences based on the BERT model, which is widely used in natural language processing, such that the model learns to extract the molecular structural information contained in a SMILES sequence. In an experiment, we first pretrain the proposed model with 100,000 SMILES sequences and then use the pretrained model to predict molecular properties on 22 data sets and the odor characteristics of molecules (98 types of odor descriptors). The experimental results show that our proposed pretraining model effectively improves the performance of molecular property prediction. SCIENTIFIC CONTRIBUTION: The 2-encoder pretraining is proposed in light of two observations: symbols in a SMILES sequence depend less on their contextual environment than symbols in a natural-language sentence do, and one compound corresponds to multiple SMILES sequences. The model pretrained with the 2-encoder shows higher robustness in molecular property prediction tasks compared to BERT, which is adept at natural language.
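    A minimal sketch of the BERT-style masking objective that such pretraining builds on (the proposed 2-encoder variant itself is not reproduced here); the character-level tokenisation and masking rate are illustrative assumptions.

```python
import random

def mask_smiles(tokens, rate=0.15, mask_token="[MASK]", ignore="[PAD]"):
    """BERT-style masked-language-model inputs/labels for one SMILES sequence:
    hidden symbols become prediction targets; all others are ignored in the loss."""
    inputs, labels = [], []
    for tok in tokens:
        if random.random() < rate:
            inputs.append(mask_token)
            labels.append(tok)      # the model must recover this symbol
        else:
            inputs.append(tok)
            labels.append(ignore)   # position excluded from the loss
    return inputs, labels

tokens = list("CC(=O)Oc1ccccc1C(=O)O")  # aspirin, character-tokenised
print(mask_smiles(tokens))
```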

  • Article Type: Journal Article
    Knowledge graph completion aims to predict missing relations between entities in a knowledge graph. One of the effective ways for knowledge graph completion is knowledge graph embedding. However, existing embedding methods usually focus on developing deeper and more complex neural networks, or leveraging additional information, which inevitably increases computational complexity and is unfriendly to real-time applications. In this article, we propose an effective BERT-enhanced shallow neural network model for knowledge graph completion named ShallowBKGC. Specifically, given an entity pair, we first apply the pre-trained language model BERT to extract text features of head and tail entities. At the same time, we use the embedding layer to extract structure features of head and tail entities. Then the text and structure features are integrated into one entity-pair representation via average operation followed by a non-linear transformation. Finally, based on the entity-pair representation, we calculate probability of each relation through multi-label modeling to predict relations for the given entity pair. Experimental results on three benchmark datasets show that our model achieves a superior performance in comparison with baseline methods. The source code of this article can be obtained from https://github.com/Joni-gogogo/ShallowBKGC.
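    A minimal sketch following the description above (the linked repository holds the authors' implementation): BERT text features and learned structure embeddings for the head and tail entities are averaged into one entity-pair representation, passed through a non-linear transformation, and scored against all relations as multi-label logits. Dimensions are illustrative.

```python
import torch
import torch.nn as nn

class ShallowBKGC(nn.Module):
    def __init__(self, num_entities, num_relations, text_dim=768, dim=200):
        super().__init__()
        self.struct = nn.Embedding(num_entities, dim)  # structure features
        self.project = nn.Linear(text_dim, dim)        # align BERT text features
        self.transform = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())
        self.scorer = nn.Linear(dim, num_relations)    # one logit per relation

    def forward(self, head_id, tail_id, head_text, tail_text):
        # head_text / tail_text: precomputed BERT [CLS] vectors, shape (B, 768).
        pair = torch.stack([self.project(head_text), self.project(tail_text),
                            self.struct(head_id), self.struct(tail_id)]).mean(dim=0)
        return self.scorer(self.transform(pair))  # multi-label relation logits
```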
