knowledge base

  • Article type: Journal Article
    BACKGROUND: Intraoperative neurophysiological monitoring (IOM) plays a pivotal role in enhancing patient safety during neurosurgical procedures. This vital technique involves the continuous measurement of evoked potentials to provide early warnings and ensure the preservation of critical neural structures. One of the primary challenges has been the effective documentation of IOM events with semantically enriched characterizations. This study aimed to address this challenge by developing an ontology-based tool.
    METHODS: We structured the development of the IOM Documentation Ontology (IOMDO) and the associated tool into three distinct phases. The initial phase focused on the ontology's creation, drawing from the OBO (Open Biological and Biomedical Ontology) principles. The subsequent phase involved agile software development, a flexible approach to encapsulate the diverse requirements and swiftly produce a prototype. The last phase entailed practical evaluation within real-world documentation settings. This crucial stage enabled us to gather firsthand insights, assessing the tool's functionality and efficacy. The observations made during this phase formed the basis for essential adjustments to ensure the tool's productive utilization.
    RESULTS: The core entities of the ontology revolve around central aspects of IOM, including measurements characterized by timestamp, type, values, and location. Concepts and terms from several ontologies were integrated into IOMDO, e.g., the Foundational Model of Anatomy (FMA), the Human Phenotype Ontology (HPO), and the ontology for surgical process models (OntoSPM) for general surgical terms. The software tool developed for extending the ontology and the associated knowledge base was built with JavaFX for the user-friendly frontend and Apache Jena for the robust backend. The tool's evaluation involved test users, who unanimously found the interface accessible and usable, even those without extensive technical expertise.
    CONCLUSIONS: Through the establishment of a structured and standardized framework for characterizing IOM events, our ontology-based tool holds the potential to enhance the quality of documentation, benefiting patient care by improving the foundation for informed decision-making. Furthermore, researchers can leverage the semantically enriched data to identify trends, patterns, and areas for surgical practice enhancement. To optimize documentation through ontology-based approaches, it is crucial to address potential modeling issues that are associated with the Ontology of Adverse Events.
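    The data model described above — measurements carrying a timestamp, type, value, and location — can be illustrated with a minimal triple-pattern sketch. All IRIs, property names, and values below are hypothetical (the abstract does not publish the actual IOMDO identifiers), and plain tuples stand in for the RDF store that Apache Jena provides in the real tool.

    ```python
    # A measurement as a set of (subject, predicate, object) triples, mirroring
    # how an RDF store such as Apache Jena would hold it. All IRIs hypothetical.
    triples = [
        ("iomdo:measurement_001", "rdf:type", "iomdo:Measurement"),
        ("iomdo:measurement_001", "iomdo:hasTimestamp", "2024-01-15T10:32:00"),
        ("iomdo:measurement_001", "iomdo:hasType", "MEP"),
        ("iomdo:measurement_001", "iomdo:hasValue", 0.85),
        ("iomdo:measurement_001", "iomdo:hasLocation", "abductor pollicis brevis"),
    ]

    def match(triples, s=None, p=None, o=None):
        """Return triples matching a pattern; None acts as a wildcard,
        like a single-pattern SPARQL query."""
        return [t for t in triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

    # Find every subject typed as a Measurement, then read its value.
    measurements = [s for s, _, _ in match(triples, p="rdf:type", o="iomdo:Measurement")]
    for m in measurements:
        value = match(triples, s=m, p="iomdo:hasValue")[0][2]
        print(m, value)
    ```

    In the actual tool, the same pattern matching would be expressed as a SPARQL query against the Jena-backed knowledge base.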

  • Article type: Journal Article
    OBJECTIVE: Linking information on Japanese pharmaceutical products to global knowledge bases (KBs) would enhance international collaborative research and yield valuable insights. However, public access to mappings of Japanese pharmaceutical products that use international controlled vocabularies remains limited. This study mapped YJ codes to RxNorm ingredient classes, providing new insights by comparing Japanese and international drug-drug interaction (DDI) information using a case study methodology.
    METHODS: Tables linking YJ codes to RxNorm concepts were created using the application programming interfaces of the Kyoto Encyclopedia of Genes and Genomes and the National Library of Medicine. A comparative analysis of Japanese and international DDI information was thus performed by linking to an international DDI KB.
    RESULTS: There was limited agreement between the Japanese and international DDI severity classifications. Cross-tabulation of Japanese and international DDIs by severity showed that 213 combinations classified as serious DDIs by an international KB were missing from the Japanese DDI information.
    CONCLUSIONS: The classification of DDI severity remains highly variable, and efforts to standardize international criteria for DDIs are desirable to ensure consistency in their severity classification. It is also imperative to augment the repository of critical DDI information, which would revalidate the utility of fostering collaborations with global KBs.
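    The cross-tabulation step can be sketched with toy data. The drug pairs and severity labels below are invented for illustration; the real analysis linked YJ codes to RxNorm ingredient classes and compared full KBs, which is how the 213 missing serious combinations were counted at scale.

    ```python
    from collections import Counter

    # Hypothetical severity labels for drug pairs (keyed by ingredient pairs)
    # in two knowledge bases; the real data come from linked YJ-code and
    # RxNorm mapping tables.
    japanese_kb = {
        ("warfarin", "aspirin"): "contraindicated",
        ("simvastatin", "clarithromycin"): "precaution",
    }
    international_kb = {
        ("warfarin", "aspirin"): "serious",
        ("simvastatin", "clarithromycin"): "serious",
        ("methotrexate", "trimethoprim"): "serious",  # absent from the Japanese KB
    }

    # Cross-tabulate: for every pair in the international KB, look up the
    # Japanese severity (or "missing" when the pair is not listed).
    crosstab = Counter(
        (intl_sev, japanese_kb.get(pair, "missing"))
        for pair, intl_sev in international_kb.items()
    )

    # Pairs the international KB calls serious but the Japanese KB omits,
    # analogous to the 213 combinations reported in the study.
    missing_serious = crosstab[("serious", "missing")]
    print(crosstab)
    print("serious internationally but missing in Japan:", missing_serious)
    ```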

  • Article type: Journal Article
    BACKGROUND: As global populations age and become susceptible to neurodegenerative illnesses, new therapies for Alzheimer disease (AD) are urgently needed. Existing data resources for drug discovery and repurposing fail to capture relationships central to the disease's etiology and response to drugs.
    OBJECTIVE: We designed the Alzheimer's Knowledge Base (AlzKB) to alleviate this need by providing a comprehensive knowledge representation of AD etiology and candidate therapeutics.
    METHODS: We designed the AlzKB as a large, heterogeneous graph knowledge base assembled using 22 diverse external data sources describing biological and pharmaceutical entities at different levels of organization (e.g., chemicals, genes, anatomy, and diseases). AlzKB uses a Web Ontology Language 2 ontology to enforce semantic consistency and allow for ontological inference. We provide a public version of AlzKB and allow users to run and modify local versions of the knowledge base.
    RESULTS: AlzKB is freely available on the web and currently contains 118,902 entities with 1,309,527 relationships between those entities. To demonstrate its value, we used graph data science and machine learning to (1) propose new therapeutic targets based on similarities of AD to Parkinson disease and (2) repurpose existing drugs that may treat AD. For each use case, AlzKB recovers known therapeutic associations while proposing biologically plausible new ones.
    CONCLUSIONS: AlzKB is a new, publicly available knowledge resource that enables researchers to discover complex translational associations for AD drug discovery. Through 2 use cases, we show that it is a valuable tool for proposing novel therapeutic hypotheses based on public biomedical knowledge.
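    The first use case — proposing targets from the similarity of AD to Parkinson disease — can be sketched with a toy neighborhood-overlap score. The miniature graph, gene symbols, and Jaccard scoring below are illustrative assumptions, not AlzKB's actual method or contents.

    ```python
    # Toy disease-gene neighborhoods (illustrative gene sets only; the real
    # AlzKB graph has 118,902 entities and 1,309,527 relationships).
    neighbors = {
        "Alzheimer": {"APP", "APOE", "MAPT", "SNCA"},
        "Parkinson": {"SNCA", "LRRK2", "MAPT", "PRKN"},
        "Influenza": {"IFITM3", "TMPRSS2"},
    }

    def jaccard(a, b):
        """Overlap of two node neighborhoods, a common graph-similarity score."""
        return len(a & b) / len(a | b)

    # Rank other diseases by similarity to Alzheimer disease.
    ad = neighbors["Alzheimer"]
    scores = sorted(
        ((d, jaccard(ad, genes)) for d, genes in neighbors.items() if d != "Alzheimer"),
        key=lambda x: x[1], reverse=True,
    )
    print(scores)

    # Genes of the most similar disease not yet linked to AD become
    # candidate therapeutic targets.
    best = scores[0][0]
    candidates = neighbors[best] - ad
    print("candidate targets from", best, ":", sorted(candidates))
    ```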

  • Article type: Journal Article
    OBJECTIVE: The rapid expansion of biomedical literature necessitates automated techniques to discern relationships between biomedical concepts from extensive free text. Such techniques facilitate the development of detailed knowledge bases and highlight research deficiencies. The LitCoin Natural Language Processing (NLP) challenge, organized by the National Center for Advancing Translational Science, aims to evaluate such potential and provides a manually annotated corpus for methodology development and benchmarking.
    METHODS: For the named entity recognition (NER) task, we utilized ensemble learning to merge predictions from three domain-specific models, namely BioBERT, PubMedBERT, and BioM-ELECTRA; devised a rule-driven detection method for cell line and taxonomy names; and annotated 70 more abstracts as an additional corpus. We further finetuned the T0pp model, with 11 billion parameters, to boost performance on relation extraction, and leveraged entities' location information (e.g., title, background) to enhance novelty prediction performance in relation extraction (RE).
    RESULTS: Our pioneering NLP system designed for this challenge secured first place in Phase I (NER) and second place in Phase II (relation extraction and novelty prediction), outpacing over 200 teams. We tested OpenAI ChatGPT 3.5 and ChatGPT 4 in a zero-shot setting using the same test set, revealing that our finetuned model considerably surpasses these broad-spectrum large language models.
    CONCLUSIONS: Our outcomes depict a robust NLP system excelling in NER and RE across various biomedical entities, emphasizing that task-specific models remain superior to generic large ones. Such insights are valuable for endeavors like knowledge graph development and hypothesis formulation in biomedical research.
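    The ensemble step for NER can be sketched as token-level majority voting over the three models' tag sequences. The BIO tags below are hypothetical, and the study's actual merging strategy may be more elaborate than a plain vote.

    ```python
    from collections import Counter

    def majority_vote(*model_predictions):
        """Merge per-token tag sequences from several NER models by taking
        the most common tag at each position."""
        merged = []
        for tags in zip(*model_predictions):
            merged.append(Counter(tags).most_common(1)[0][0])
        return merged

    # Hypothetical BIO tags from three domain-specific models (the study used
    # BioBERT, PubMedBERT, and BioM-ELECTRA) for one five-token sentence.
    biobert      = ["B-Gene", "O", "B-Disease", "I-Disease", "O"]
    pubmedbert   = ["B-Gene", "O", "B-Disease", "O",         "O"]
    biom_electra = ["O",      "O", "B-Disease", "I-Disease", "O"]

    print(majority_vote(biobert, pubmedbert, biom_electra))
    ```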

  • Article type: Journal Article
    Collagen is a key component of the extracellular matrix (ECM). In the remodeling of the ECM, a remarkable variation in collagen post-translational modifications (PTMs) occurs. This makes collagen a potential target for understanding extracellular matrix remodeling during pathological conditions. Over the years, scientists have gathered a huge amount of data about collagen PTMs during extracellular matrix remodeling. To make such information easily accessible in a consolidated space, we have developed ColPTMScape (https://colptmscape.iitmandi.ac.in/), a dedicated knowledge base for collagen PTMs. The identified site-specific PTMs, quantitated PTM sites, and PTM maps of collagen chains are deliverables to the scientific community, especially to matrix biologists. Through this knowledge base, users can easily obtain information on differences in collagen PTMs across tissues and organisms.

  • Article type: Journal Article
    Pulmonary Tuberculosis (PTB) is an infectious disease caused by a bacterium called Mycobacterium tuberculosis. This paper aims to create a Symbolic Artificial Intelligence (SAI) system to diagnose PTB using clinical and paraclinical data. Usually, automatic PTB diagnosis is based on either microbiological tests or lung X-rays. It is challenging to identify PTB accurately due to its similarity to other lung diseases, and X-rays alone are not sufficient to diagnose it. Therefore, it is crucial to implement a system that can diagnose based on all paraclinical data. Thus, we propose in this paper a new PTB ontology that stores all paraclinical tests and clinical symptoms. Our SAI system includes a domain ontology and a knowledge base with performance indicators, and it proposes a solution for diagnosing current and future PTB patients, including atypical cases. Our approach is based on a real database spanning more than four years from our collaborators at a Pondicherry hospital in India.
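    One way a system like this could combine clinical symptoms with paraclinical findings is weighted scoring against ontology-derived indicators. The indicator names, weights, and threshold below are invented for illustration and are not taken from the paper's ontology.

    ```python
    # Hypothetical indicator weights, standing in for the performance
    # indicators stored in the PTB domain ontology.
    ptb_indicators = {
        "persistent_cough": 2, "night_sweats": 2, "weight_loss": 2,
        "positive_sputum_smear": 5, "cavitary_lesion_on_xray": 3,
    }

    def ptb_score(findings, indicators=ptb_indicators, threshold=7):
        """Sum the weights of observed findings; flag PTB when the total
        reaches the threshold, combining clinical and paraclinical evidence
        rather than relying on X-ray alone."""
        score = sum(w for f, w in indicators.items() if f in findings)
        return score, score >= threshold

    patient = {"persistent_cough", "night_sweats", "positive_sputum_smear"}
    score, suspected = ptb_score(patient)
    print(score, suspected)
    ```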

  • Article type: Journal Article
    Hippocampome.org is a mature open-access knowledge base of the rodent hippocampal formation, focusing on neuron types and their properties. Previously, Hippocampome.org v1.0 established a foundational classification system identifying 122 hippocampal neuron types based on their axonal and dendritic morphologies, main neurotransmitter, membrane biophysics, and molecular expression (Wheeler et al., 2015). Releases v1.1 through v1.12 furthered the aggregation of literature-mined data, including, among others, neuron counts, spiking patterns, synaptic physiology, in vivo firing phases, and connection probabilities. Those additional properties increased the online information content of this public resource over 100-fold, enabling numerous independent discoveries by the scientific community. Hippocampome.org v2.0, introduced here, besides incorporating over 50 new neuron types, now recenters its focus on extending the functionality to build real-scale, biologically detailed, data-driven computational simulations. In all cases, the freely downloadable model parameters are directly linked to the specific peer-reviewed empirical evidence from which they were derived. Possible research applications include quantitative, multiscale analyses of circuit connectivity and spiking neural network simulations of activity dynamics. These advances can help generate precise, experimentally testable hypotheses and shed light on the neural mechanisms underlying associative memory and spatial navigation.

  • Article type: Journal Article
    BACKGROUND: Recent developments in the domain of biomedical knowledge bases (KBs) open up new ways to exploit biomedical knowledge that is available in the form of KBs. Significant work has been done in the direction of biomedical KB creation and KB completion, specifically for KBs containing gene-disease associations and other related entities. However, the use of such biomedical KBs in combination with patients' temporal clinical data remains largely unexplored, though it has the potential to immensely benefit medical diagnostic decision support systems.
    RESULTS: We propose two new algorithms, LOADDx and SCADDx, which combine a patient's gene expression data with gene-disease associations and other related information available in the form of a KB, to assist personalized disease diagnosis. We tested both algorithms on two KBs and on four real-world gene expression datasets of respiratory viral infection caused by influenza-like viruses of 19 subtypes. We also compared the performance of the proposed algorithms with that of five existing state-of-the-art machine learning algorithms (k-NN, Random Forest, XGBoost, Linear SVM, and SVM with RBF kernel) using two validation approaches: LOOCV and a single internal validation set. Both SCADDx and LOADDx outperform the existing algorithms when evaluated with both validation approaches. SCADDx is able to detect infections with up to 100% accuracy in the cases of Datasets 2 and 3. Overall, SCADDx and LOADDx are able to detect an infection within 72 h of infection with 91.38% and 92.66% average accuracy, respectively, considering all four datasets, whereas XGBoost, which performed best among the existing machine learning algorithms, can detect the infection with only 86.43% accuracy on average.
    CONCLUSIONS: We demonstrate how our novel idea of using the most and least differentially expressed genes in combination with a KB can enable identification of the diseases that a patient is most likely to have at a particular time, from a KB with thousands of diseases. Moreover, the proposed algorithms can provide a short ranked list of the most likely diseases for each patient along with their most affected genes, and other entities linked with them in the KB, which can support health care professionals in their decision-making.
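    The core idea — scoring diseases in a KB against a patient's most differentially expressed genes — can be sketched as follows. The gene-disease associations and the simple overlap score are illustrative assumptions; the published LOADDx/SCADDx scoring schemes are more involved.

    ```python
    # Hypothetical gene-disease associations in KB form; LOADDx/SCADDx
    # combine such a KB with the patient's most and least differentially
    # expressed genes.
    kb = {
        "influenza_A": {"IFIT1", "ISG15", "OAS1", "MX1"},
        "rhinovirus":  {"ISG15", "CXCL10"},
        "bacterial":   {"IL1B", "TNF"},
    }

    def rank_diseases(top_de_genes, kb):
        """Score each disease by how many of the patient's most
        differentially expressed genes it is linked to in the KB, and
        return a ranked list (a simplification of the published scoring)."""
        scores = {d: len(genes & top_de_genes) for d, genes in kb.items()}
        return sorted(scores.items(), key=lambda x: x[1], reverse=True)

    # Most differentially expressed genes at, say, 48 h post-exposure.
    patient_top_genes = {"IFIT1", "ISG15", "MX1", "ACTB"}
    ranking = rank_diseases(patient_top_genes, kb)
    print(ranking)
    ```

    The ranked list, together with the matched genes, is the kind of short, interpretable output the abstract describes for decision support.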

  • Article type: Journal Article
    With the rapid development of deep learning techniques, their applications have become increasingly widespread in various domains. However, traditional deep learning methods are often referred to as "black box" models with low interpretability of their results, posing challenges for their application in certain critical domains. In this study, we propose a comprehensive method for the interpretability analysis of sentiment models. The proposed method encompasses two main aspects: attention-based analysis and external knowledge integration. First, we train the model within sentiment classification and generation tasks to capture attention scores from multiple perspectives. This multi-angle approach reduces bias and provides a more comprehensive understanding of the underlying sentiment. Second, we incorporate an external knowledge base to improve evidence extraction. By leveraging character scores, we retrieve complete sentiment evidence phrases, addressing the challenge of incomplete evidence extraction in Chinese texts. Experimental results on a sentiment interpretability evaluation dataset demonstrate the effectiveness of our method. We observe notable increases of 1.3% in accuracy, 13% in Macro-F1, and 23% in MAP. Overall, our approach offers a robust solution for enhancing the interpretability of sentiment models by combining attention-based analysis with the integration of external knowledge.
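    The character-score evidence extraction can be sketched as grouping consecutive high-scoring characters into phrases. The example text, scores, and threshold are invented; the paper derives its character scores from attention, which this sketch takes as given.

    ```python
    def extract_evidence(text, char_scores, threshold=0.5):
        """Group consecutive characters whose score exceeds the threshold
        into complete evidence phrases -- a simplified version of the
        character-score retrieval for Chinese text, where token boundaries
        make span extraction incomplete."""
        phrases, current = [], []
        for ch, score in zip(text, char_scores):
            if score >= threshold:
                current.append(ch)
            elif current:
                phrases.append("".join(current))
                current = []
        if current:
            phrases.append("".join(current))
        return phrases

    # "The service is attentive but the price is high" (10 characters),
    # with one hypothetical score per character.
    text = "服务很周到但价格偏高"
    scores = [0.8, 0.7, 0.9, 0.9, 0.8, 0.1, 0.6, 0.7, 0.8, 0.9]
    print(extract_evidence(text, scores))
    ```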

  • Article type: Journal Article
    Vrikshayurveda (an ancient Indian science of plant life) comprises a complete plant-life knowledge compendium covering plant physiology, horticulture, pathology, and treatment. Though a translation of the manuscript is available, the knowledge it contains is not easily accessible to ordinary farmers who want answers to specific problems, or to researchers who want references for specific topics, without reading the complete book. This research work proposes to convert the knowledge in the manuscript into an expert system that can provide solutions to specific queries from farmers and agriculture stakeholders. A rule-based expert system using backward chaining is developed. The database in this design covers ten diseases. The evaluation was done on the full dataset, and the results are compatible with the expert's diagnosis. Thus, users can get comprehensive information on Vriksha-Ayurvedic expertise on all elements of disease and plant protection.
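    Backward chaining, the inference strategy named above, can be sketched in a few lines: a goal is proved either from observed facts or by recursively proving the premises of a rule that concludes it. The plant-disease rules below are invented for illustration and are not taken from the Vrikshayurveda knowledge base.

    ```python
    # Hypothetical rules: each rule is (premises, conclusion).
    rules = [
        ({"yellowing_leaves", "stunted_growth"}, "nutrient_deficiency"),
        ({"nutrient_deficiency", "waterlogged_soil"}, "root_disorder"),
    ]

    def prove(goal, facts, rules):
        """Backward chaining: a goal holds if it is a known fact, or if
        every premise of some rule concluding the goal can be proved."""
        if goal in facts:
            return True
        return any(
            conclusion == goal and all(prove(p, facts, rules) for p in premises)
            for premises, conclusion in rules
        )

    # A farmer reports these observations; the system chains backward
    # from the suspected disorder to the observed symptoms.
    observed = {"yellowing_leaves", "stunted_growth", "waterlogged_soil"}
    print(prove("root_disorder", observed, rules))
    ```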
