knowledge base

  • Article type: Journal Article
    BACKGROUND: Tower cranes are commonly employed in construction projects, despite presenting significant hazards to the workforce involved.
    METHODS: To address these safety concerns, a Knowledge-Based Decision-Support System for Safety Risk Assessment (KBDSS-SRA) has been developed. The system's capacity to thoroughly evaluate associated risks is illustrated through its utilization in various construction endeavors.
    RESULTS: The system accomplishes the following goals: (1) compiles essential risk factors specific to tower crane operations, (2) identifies critical safety risks that jeopardize worker well-being, (3) examines and assesses the identified safety risks, and (4) automates the labor-intensive and error-prone processes of safety risk assessment. The KBDSS-SRA assists safety management personnel in formulating well-grounded decisions and implementing effective measures to enhance the safety of tower crane operations.
    CONCLUSIONS: This is facilitated by an advanced computerized tool that underscores the paramount significance of safety risks and suggests strategies for their future mitigation.

  • Article type: Journal Article
    The proposed smart system for Student Performance Assessment (SPA) is a system that evaluates students' knowledge and skill attainment in a specific course by measuring their achievements of the Course Learning Outcomes (CLOs). The instructor defines the aspects, weights, and rating scale used by SPA to analyze each course. The system calculates the average of students' marks in each learning outcome and compares them with the CLO targets and scores to determine the effectiveness of the teaching and learning methods used. The system uses facts and rules extracted from the course syllabus and Bloom's Taxonomy to build its knowledge base. This paper presents the development of the SPA inference engine, which is used to find CLO targets based on the course level. The inference engine uses efficient procedures and a prediction process to determine the correct target and score, providing a reliable and understandable methodology for reasoning about the information in the knowledge base and formulating conclusions. SPA is a highly responsive and intelligent system that can be a valuable tool for measuring students' achievements. Its characteristics include high performance, reliability, and intelligibility, and its combination of cognitive systems and cognitive theory has led to remarkable progress in measuring student performance. Limitations include dependency on accurate course content and initial setup time, potential bias in CLO weight assignments, challenges in integrating SPA with existing institutional databases, the need for continuous updates to the knowledge base to reflect curriculum changes, and potential resistance from educators to adopt new technologies. Future improvements could involve adaptive learning integrations, enhanced user interfaces, and broader applicability across diverse educational settings.
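The core calculation the abstract describes — averaging students' marks per Course Learning Outcome and comparing the result with the instructor-defined target — can be sketched as follows. This is a minimal illustration; the function name, data shapes, and thresholds are hypothetical, not taken from the SPA system itself.

```python
# Hypothetical sketch of the CLO attainment check described in the abstract:
# average the marks for each CLO and compare against the instructor's target.
def clo_attainment(marks: dict[str, list[float]],
                   targets: dict[str, float]) -> dict[str, dict]:
    """For each CLO, compute the class average and whether it meets the target."""
    report = {}
    for clo, scores in marks.items():
        avg = sum(scores) / len(scores) if scores else 0.0
        target = targets.get(clo, 0.0)
        report[clo] = {"average": round(avg, 2), "target": target, "met": avg >= target}
    return report

marks = {"CLO1": [80, 70, 90], "CLO2": [55, 60, 50]}   # illustrative class marks
targets = {"CLO1": 70.0, "CLO2": 65.0}                 # illustrative targets
result = clo_attainment(marks, targets)
```

A report like this would let an instructor see at a glance which outcomes the teaching methods met and which fell short.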

  • Article type: Journal Article
    BACKGROUND: Retrieving comprehensible rule-based knowledge from medical data by machine learning is a beneficial task, e.g., for automating the process of creating a decision support system. While this has recently been studied by means of exception-tolerant hierarchical knowledge bases (i.e., knowledge bases, where rule-based knowledge is represented on several levels of abstraction), privacy concerns have not been addressed extensively in this context yet. However, privacy plays an important role, especially for medical applications.
    METHODS: When parts of the original dataset can be restored from a learned knowledge base, there may be a practically and legally relevant risk of re-identification for individuals. In this paper, we study privacy issues of exception-tolerant hierarchical knowledge bases which are learned from data. We propose approaches for determining and eliminating privacy issues of the learned knowledge bases.
    RESULTS: We present results for synthetic as well as real-world datasets.
    CONCLUSIONS: The results show that our approach effectively prevents privacy breaches while only moderately decreasing the inference quality.
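One way a learned rule can leak privacy is when its antecedent matches only a handful of training records, effectively singling out individuals. The following is an illustrative k-anonymity-style check in that spirit — not the paper's actual method, and all names and data are fabricated for the example.

```python
# Illustrative privacy screen (NOT the paper's method): flag a learned rule as
# a re-identification risk if its antecedent matches fewer than k records.
def risky_rules(rules, records, k=3):
    """Return rules whose antecedent matches fewer than k records."""
    flagged = []
    for antecedent, consequent in rules:
        matches = sum(
            1 for rec in records
            if all(rec.get(attr) == val for attr, val in antecedent.items())
        )
        if matches < k:
            flagged.append((antecedent, consequent))
    return flagged

records = [
    {"age": "30-40", "dx": "flu"},
    {"age": "30-40", "dx": "flu"},
    {"age": "30-40", "dx": "flu"},
    {"age": "70-80", "dx": "rare-disease"},
]
rules = [
    ({"age": "30-40"}, "dx=flu"),           # matches 3 records: fine at k=3
    ({"age": "70-80"}, "dx=rare-disease"),  # matches 1 record: flagged
]
flagged = risky_rules(rules, records, k=3)
```

Flagged rules could then be generalized or dropped before the knowledge base is released, trading a little inference quality for privacy, as the abstract's conclusion suggests.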

  • Article type: Journal Article
    Relying on our experience on the development of data registration and management systems for clinical and biological data coming from patients with hematological malignancies, as well as on the design of strategies for data collection and analysis to support multi-center, clinical association studies, we designed a framework for the standardized collection and transformation of clinically relevant real-world data into evidence, to meet the challenges of gathering biomedical data collected during daily clinical practice in order to promote basic and clinical research.

  • Article type: Journal Article
    This article presents experience in constructing the National Unified Terminological System (NUTS), which has an ontological structure based on the international Unified Medical Language System (UMLS). UMLS has been adapted and enriched with formulations from national directories, with relationships extracted from the texts of scientific articles and electronic health records, and with weight coefficients.

  • Article type: Journal Article
    This article presents our experience in developing an ontological model that can be used in creating clinical decision support systems (CDSS). We used the largest international biomedical terminological metathesaurus, the Unified Medical Language System (UMLS), as the basis of our model. This metathesaurus has been adapted into Russian using an automated hybrid translation system with expert control. The product we created was named the National Unified Terminological System (NUTS). We added more than 33 million scientific and clinical relationships between NUTS terms, extracted from the texts of scientific articles and electronic health records. We also computed weights for each relationship, standardized their values, and created a symptom checker for preliminary diagnostics based on this. We expect NUTS to allow solving the task of named entity recognition (NER) and to increase term interoperability across different CDSS.

  • Article type: Journal Article
    BACKGROUND: Intraoperative neurophysiological monitoring (IOM) plays a pivotal role in enhancing patient safety during neurosurgical procedures. This vital technique involves the continuous measurement of evoked potentials to provide early warnings and ensure the preservation of critical neural structures. One of the primary challenges has been the effective documentation of IOM events with semantically enriched characterizations. This study aimed to address this challenge by developing an ontology-based tool.
    METHODS: We structured the development of the IOM Documentation Ontology (IOMDO) and the associated tool into three distinct phases. The initial phase focused on the ontology's creation, drawing from the OBO (Open Biological and Biomedical Ontology) principles. The subsequent phase involved agile software development, a flexible approach to encapsulate the diverse requirements and swiftly produce a prototype. The last phase entailed practical evaluation within real-world documentation settings. This crucial stage enabled us to gather firsthand insights, assessing the tool's functionality and efficacy. The observations made during this phase formed the basis for essential adjustments to ensure the tool's productive utilization.
    RESULTS: The core entities of the ontology revolve around central aspects of IOM, including measurements characterized by timestamp, type, values, and location. Concepts and terms of several ontologies were integrated into IOMDO, e.g., the Foundational Model of Anatomy (FMA), the Human Phenotype Ontology (HPO) and the ontology for surgical process models (OntoSPM) related to general surgical terms. The software tool developed for extending the ontology and the associated knowledge base was built with JavaFX for the user-friendly frontend and Apache Jena for the robust backend. The tool's evaluation involved test users who unanimously found the interface accessible and usable, even for those without extensive technical expertise.
    CONCLUSIONS: Through the establishment of a structured and standardized framework for characterizing IOM events, our ontology-based tool holds the potential to enhance the quality of documentation, benefiting patient care by improving the foundation for informed decision-making. Furthermore, researchers can leverage the semantically enriched data to identify trends, patterns, and areas for surgical practice enhancement. To optimize documentation through ontology-based approaches, it is crucial to address potential modeling issues that are associated with the Ontology of Adverse Events.

  • Article type: Journal Article
    OBJECTIVE: Linking information on Japanese pharmaceutical products to global knowledge bases (KBs) would enhance international collaborative research and yield valuable insights. However, public access to mappings of Japanese pharmaceutical products that use international controlled vocabularies remains limited. This study mapped YJ codes to RxNorm ingredient classes, providing new insights by comparing Japanese and international drug-drug interaction (DDI) information using a case study methodology.
    METHODS: Tables linking YJ codes to RxNorm concepts were created using the application programming interfaces of the Kyoto Encyclopedia of Genes and Genomes and the National Library of Medicine. A comparative analysis of Japanese and international DDI information was thus performed by linking to an international DDI KB.
    RESULTS: There was limited agreement between the Japanese and international DDI severity classifications. Cross-tabulation of Japanese and international DDIs by severity showed that 213 combinations classified as serious DDIs by an international KB were missing from the Japanese DDI information.
    CONCLUSIONS: It is desirable that efforts be undertaken to standardize international criteria for DDIs to ensure consistency in the classification of their severity.
    CONCLUSIONS: The classification of DDI severity remains highly variable. It is imperative to augment the repository of critical DDI information, which would revalidate the utility of fostering collaborations with global KBs.
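The cross-tabulation step described above — comparing drug pairs by Japanese versus international severity and finding combinations the international KB rates serious but that the Japanese source lacks — can be sketched as below. The drug pairs and severity labels are fabricated placeholders, not data from the study.

```python
# Minimal sketch of the severity cross-tabulation described in the abstract.
# All pairs and severity labels here are illustrative.
from collections import Counter

def cross_tab(jp_ddis, intl_ddis):
    """jp_ddis/intl_ddis: dict mapping a frozenset drug pair -> severity label."""
    table = Counter()
    for pair, intl_sev in intl_ddis.items():
        jp_sev = jp_ddis.get(pair, "missing")
        table[(jp_sev, intl_sev)] += 1
    # Serious international DDIs absent from the Japanese source.
    missing_serious = [p for p, s in intl_ddis.items()
                       if s == "serious" and p not in jp_ddis]
    return table, missing_serious

jp = {frozenset({"drugA", "drugB"}): "caution"}
intl = {
    frozenset({"drugA", "drugB"}): "serious",
    frozenset({"drugC", "drugD"}): "serious",
}
table, missing = cross_tab(jp, intl)
```

In the study itself, this kind of tabulation over the full mapped vocabulary surfaced 213 serious combinations missing from the Japanese DDI information.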

  • Article type: Journal Article
    BACKGROUND: As global populations age and become susceptible to neurodegenerative illnesses, new therapies for Alzheimer disease (AD) are urgently needed. Existing data resources for drug discovery and repurposing fail to capture relationships central to the disease's etiology and response to drugs.
    OBJECTIVE: We designed the Alzheimer\'s Knowledge Base (AlzKB) to alleviate this need by providing a comprehensive knowledge representation of AD etiology and candidate therapeutics.
    METHODS: We designed the AlzKB as a large, heterogeneous graph knowledge base assembled using 22 diverse external data sources describing biological and pharmaceutical entities at different levels of organization (eg, chemicals, genes, anatomy, and diseases). AlzKB uses a Web Ontology Language 2 ontology to enforce semantic consistency and allow for ontological inference. We provide a public version of AlzKB and allow users to run and modify local versions of the knowledge base.
    RESULTS: AlzKB is freely available on the web and currently contains 118,902 entities with 1,309,527 relationships between those entities. To demonstrate its value, we used graph data science and machine learning to (1) propose new therapeutic targets based on similarities of AD to Parkinson disease and (2) repurpose existing drugs that may treat AD. For each use case, AlzKB recovers known therapeutic associations while proposing biologically plausible new ones.
    CONCLUSIONS: AlzKB is a new, publicly available knowledge resource that enables researchers to discover complex translational associations for AD drug discovery. Through 2 use cases, we show that it is a valuable tool for proposing novel therapeutic hypotheses based on public biomedical knowledge.
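One repurposing idea mentioned above — exploiting the similarity of AD to Parkinson disease — can be illustrated with a toy gene-overlap score. The graph below is fabricated for the example; AlzKB itself is a far larger heterogeneous graph queried with graph data science tooling, not this hand-rolled snippet.

```python
# Toy illustration of similarity-based repurposing: score disease similarity by
# Jaccard overlap of associated genes, then rank drugs by how many of their
# targets hit AD-associated genes. All entities below are fabricated.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

disease_genes = {
    "AD":        {"APP", "MAPT", "SNCA", "GRN"},
    "Parkinson": {"SNCA", "MAPT", "LRRK2"},
}
drug_targets = {
    "drug_x": {"SNCA"},    # targets a gene shared by both diseases
    "drug_y": {"BRCA1"},   # unrelated target
}

similarity = jaccard(disease_genes["AD"], disease_genes["Parkinson"])

# Rank candidate drugs by overlap of their targets with AD-associated genes.
candidates = sorted(
    drug_targets,
    key=lambda d: len(drug_targets[d] & disease_genes["AD"]),
    reverse=True,
)
```

In a real knowledge graph the same intuition is typically implemented with node-similarity or link-prediction algorithms over many entity types, not just gene sets.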

  • Article type: Journal Article
    OBJECTIVE: The rapid expansion of biomedical literature necessitates automated techniques to discern relationships between biomedical concepts from extensive free text. Such techniques facilitate the development of detailed knowledge bases and highlight research deficiencies. The LitCoin Natural Language Processing (NLP) challenge, organized by the National Center for Advancing Translational Science, aims to evaluate such potential and provides a manually annotated corpus for methodology development and benchmarking.
    METHODS: For the named entity recognition (NER) task, we utilized ensemble learning to merge predictions from three domain-specific models, namely BioBERT, PubMedBERT, and BioM-ELECTRA, devised a rule-driven detection method for cell line and taxonomy names, and annotated 70 more abstracts as an additional corpus. We further finetuned the T0pp model, with 11 billion parameters, to boost the performance on relation extraction, and leveraged entities' location information (eg, title, background) to enhance novelty prediction performance in relation extraction (RE).
    RESULTS: Our pioneering NLP system designed for this challenge secured first place in Phase I-NER and second place in Phase II-relation extraction and novelty prediction, outpacing over 200 teams. We tested OpenAI ChatGPT 3.5 and ChatGPT 4 in a Zero-Shot setting using the same test set, revealing that our finetuned model considerably surpasses these broad-spectrum large language models.
    CONCLUSIONS: Our outcomes depict a robust NLP system excelling in NER and RE across various biomedical entities, emphasizing that task-specific models remain superior to generic large ones. Such insights are valuable for endeavors like knowledge graph development and hypothesis formulation in biomedical research.
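Merging predictions from several NER models can be as simple as token-level majority voting, sketched below. This is a generic illustration in the spirit of the ensemble described above (the system combined BioBERT, PubMedBERT, and BioM-ELECTRA); the authors' exact merging scheme is not specified here.

```python
# Hedged sketch of token-level majority voting across NER models.
from collections import Counter

def majority_vote(predictions: list[list[str]]) -> list[str]:
    """predictions: one BIO label sequence per model, all the same length."""
    merged = []
    for labels in zip(*predictions):
        merged.append(Counter(labels).most_common(1)[0][0])
    return merged

# Illustrative label sequences from three hypothetical models.
model_a = ["B-Gene", "O", "B-Disease"]
model_b = ["B-Gene", "O", "O"]
model_c = ["O",      "O", "B-Disease"]
ensemble = majority_vote([model_a, model_b, model_c])
```

Real systems often refine this with confidence weighting or span-level reconciliation so that voting cannot produce inconsistent BIO sequences.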
