database

  • Article type: Journal Article
    Anaerobic digestion (AD) has become a popular technique for organic waste management, offering both economic and environmental advantages. As AD becomes increasingly prevalent worldwide, research efforts are primarily focused on optimizing its processes. During the operation of AD systems, the occurrence of unstable events is inevitable. So far, numerous conclusions have been drawn from full- and lab-scale studies regarding the driving factors of start-up perturbations. However, the lack of standardized practices reported in start-up studies raises concerns about the comparability and reliability of the obtained data. This study aims to develop a knowledge database and investigate the possibility of applying machine learning techniques to experimentally extracted data to assist start-up planning and monitoring. Thus, a standardized database was constructed, referencing 75 start-up cases of one-stage wet continuously stirred tank reactors (CSTRs) processing agricultural, industrial, or municipal organic effluents in mono-digestion, drawn from 31 studies. Ten percent of the total observations included in this database concern failed start-up experiments. Correlations between the parameters and their impact on the start-up duration were then studied using multivariate analysis and a model-based ranking methodology. The correlation analysis of the database highlighted trends in operational choices. Scenarios favoring a short start-up duration were found to involve relatively low retention times (average initial and final hydraulic retention times, HRTi and HRTf, of 26.25 and 20.6 days, respectively), high mean organic loading rates (average OLRmean of 5.24 g VS·d⁻¹·L⁻¹), and the processing of highly fermentable substrates (average feed volatile solids, VSfeed, of 81.35 g L⁻¹). The model-based ranking of AD parameters demonstrated that HRTf, VSfeed, and the target temperature (Tf) have the strongest impact on the start-up duration, receiving the highest relative scores among the evaluated AD parameters. The database could serve as a reference for comparison in future start-up studies, allowing the identification of factors that should be closely controlled.
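    The abstract does not name the model behind this ranking, so the Python sketch below is purely illustrative: it assumes the database has been exported to a CSV with hypothetical column names (HRTi, HRTf, OLRmean, VSfeed, Tf, startup_duration_days) and uses random-forest permutation importance as a stand-in for the model-based ranking of AD parameters.
    ```python
    # Illustrative sketch only: file and column names are hypothetical, and the
    # model is a stand-in for the (unspecified) model-based ranking methodology.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance

    df = pd.read_csv("ad_startup_database.csv")            # hypothetical export of the database
    params = ["HRTi", "HRTf", "OLRmean", "VSfeed", "Tf"]    # candidate AD parameters
    X, y = df[params], df["startup_duration_days"]          # target: start-up duration

    # Correlation analysis between parameters and the start-up duration
    print(df[params + ["startup_duration_days"]].corr(method="spearman"))

    # Fit a simple model and rank the parameters by permutation importance
    model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
    scores = permutation_importance(model, X, y, n_repeats=30, random_state=0)
    ranking = pd.Series(scores.importances_mean, index=params).sort_values(ascending=False)
    print(ranking)  # relative scores, analogous to the ranking reported in the abstract
    ```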

  • Article type: Meta-Analysis
    Data extraction (DE) is a challenging step in systematic reviews (SRs). Complex SRs can involve multiple interventions and/or outcomes and encompass multiple research questions. Attempts have been made to clarify DE aspects focusing on the subsequent meta-analysis; there are, however, no guidelines for DE in complex SRs. Comparing datasets extracted independently by pairs of reviewers to detect discrepancies is also cumbersome, especially when the number of extracted variables and/or studies is colossal. This work aims to provide a set of practical steps to help SR teams design and build DE tools and compare extracted data for complex SRs.
    We provided a 10-step guideline, from determining data items and structure to data comparison, to help identify discrepancies and resolve data disagreements between reviewers. The steps were organised into three phases: planning, database building, and data manipulation. Each step was described and illustrated with examples, and relevant references were provided for further guidance. A demonstration example was presented to illustrate the application of Epi Info and R in the database building and data manipulation phases. The proposed guideline was also summarised and compared with previous DE guidelines.
    The steps of this guideline are described generally, without focusing on a particular software application or meta-analysis technique. We emphasised determining the organisational data structure and highlighted its role in the subsequent steps of database building. With only minimal programming skills needed, the relational database and data validation features of Epi Info can be utilised to build DE tools for complex SRs. However, two R libraries are needed to facilitate data comparison and resolve discrepancies.
    We hope adopting this guideline can help review teams construct DE tools that suit their complex review projects. Although Epi Info depends on proprietary software for data storage, it can still be a potential alternative to other commercial DE software for completing complex reviews.
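    The guideline itself relies on Epi Info and R; as a purely illustrative alternative in Python, the sketch below compares two reviewers' extraction files cell by cell (the file names and the study_id key column are hypothetical) and exports the disagreements for the consensus discussion.
    ```python
    # Illustrative Python alternative to the R-based comparison step; the
    # file names and the "study_id" key column are hypothetical.
    import pandas as pd

    rev_a = pd.read_csv("extraction_reviewer_A.csv").set_index("study_id")
    rev_b = pd.read_csv("extraction_reviewer_B.csv").set_index("study_id")

    # Align rows and columns, then list every cell where the two extractions disagree
    rev_a, rev_b = rev_a.align(rev_b, join="outer")
    discrepancies = rev_a.compare(rev_b)  # one row per study, paired columns per disagreement
    print(discrepancies)

    discrepancies.to_csv("discrepancies_to_resolve.csv")  # handed back to the reviewer pair
    ```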

  • Article type: Journal Article
    Standardized nomenclature for genes, gene products, and isoforms is crucial to prevent ambiguity and enable clear communication of scientific data, facilitating efficient biocuration and data sharing. Standardized genotype nomenclature, which describes alleles present in a specific strain that differ from those in the wild-type reference strain, is equally essential to maximize research impact and ensure that results linking genotypes to phenotypes are Findable, Accessible, Interoperable, and Reusable (FAIR). In this publication, we extend the fission yeast clade gene nomenclature guidelines to support the curation efforts at PomBase (www.pombase.org), the Schizosaccharomyces pombe Model Organism Database. This update introduces nomenclature guidelines for noncoding RNA genes, following those set forth by the Human Genome Organisation Gene Nomenclature Committee. Additionally, we provide a significant update to the allele and genotype nomenclature guidelines originally published in 1987, to standardize the diverse range of genetic modifications enabled by the fission yeast genetic toolbox. These updated guidelines reflect a community consensus between numerous fission yeast researchers. Adoption of these rules will improve consistency in gene and genotype nomenclature, and facilitate machine-readability and automated entity recognition of fission yeast genes and alleles in publications or datasets. In conclusion, our updated guidelines provide a valuable resource for the fission yeast research community, promoting consistency, clarity, and FAIRness in genetic data sharing and interpretation.
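    As a hypothetical illustration of the automated entity recognition these guidelines aim to enable, the Python sketch below uses a deliberately simplified regular expression for the common fission yeast gene pattern (three lowercase letters plus a number, e.g. cdc2) with an optional hyphenated allele suffix (e.g. cdc2-33); the actual guidelines cover far more cases than this pattern does.
    ```python
    # Simplified, assumption-laden sketch of gene/allele entity recognition;
    # the regex covers only the common "three letters + number" pattern.
    import re

    GENE_ALLELE = re.compile(r"\b([a-z]{3}\d+)(-[A-Za-z0-9]+)?\b")

    text = "Strains carrying cdc2-33 and ura4-D18 were crossed to an ade6 tester."
    for match in GENE_ALLELE.finditer(text):
        gene, allele = match.group(1), match.group(2)
        print(f"gene: {gene}" + (f", allele: {gene}{allele}" if allele else ""))
    ```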

  • Article type: Journal Article
    We sought to describe the processes undertaken for the systematic selection and consensus determination of the common data elements for inclusion in a national pediatric critical care database in Canada.
    We conducted a multicentre Delphi consensus study of Canadian pediatric intensive care units (PICUs) participating in the creation of a national database. Participants were PICU health care professionals, allied health professionals, caregivers, and other stakeholders. A dedicated panel group created a baseline survey of data elements based on literature, current PICU databases, and expertise in the field. The survey was then used for a Delphi iterative consensus process over three rounds, conducted from March to June 2021.
    Of 86 invited participants, 68 (79%) engaged and agreed to participate as part of an expert panel. Panel participants were sent three rounds of the survey with response rates of 62 (91%), 61 (90%) and 55 (81%), respectively. After three rounds, 72 data elements were included from six domains, mostly reflecting clinical status and complex medical interventions received in the PICU. While race, gender, and home region were included by consensus, variables such as minority status, indigenous status, primary language, and ethnicity were not.
    We present the methodological framework used to select data elements by consensus for a national pediatric critical care database, with participation from a diverse stakeholder group of experts and caregivers from all PICUs in Canada. The selected core data elements will provide standardized and synthesized data for research, benchmarking, and quality improvement initiatives for critically ill children.

  • Article type: Journal Article
    BACKGROUND: Despite the many opportunities data reuse offers, its implementation presents many difficulties, and raw data cannot be reused directly. Information is not always directly available in the source database and often needs to be computed afterwards from the raw data by defining an algorithm.
    OBJECTIVE: The main purpose of this article is to present a standardized description of the steps and transformations required during the feature extraction process when conducting retrospective observational studies. A secondary objective is to identify how the features could be stored in the schema of a data warehouse.
    METHODS: This study involved the following 3 main steps: (1) the collection of relevant study cases related to feature extraction and based on the automatic and secondary use of data; (2) the standardized description of raw data, steps, and transformations, which were common to the study cases; and (3) the identification of an appropriate table to store the features in the Observational Medical Outcomes Partnership (OMOP) common data model (CDM).
    RESULTS: We interviewed 10 researchers from 3 French university hospitals and a national institution, who were involved in 8 retrospective and observational studies. Based on these studies, 2 states (track and feature) and 2 transformations (track definition and track aggregation) emerged. "Track" is a time-dependent signal or period of interest, defined by a statistical unit, a value, and 2 milestones (a start event and an end event). "Feature" is time-independent high-level information with dimensionality identical to the statistical unit of the study, defined by a label and a value. The time dimension has become implicit in the value or name of the variable. We propose the 2 tables "TRACK" and "FEATURE" to store variables obtained in feature extraction and extend the OMOP CDM.
    CONCLUSIONS: We propose a standardized description of the feature extraction process. The process combines the 2 steps of track definition and track aggregation. Dividing the feature extraction into these 2 steps confines the difficulty to the track definition step. The standardization of tracks requires great expertise with regard to the data, but allows the application of an infinite number of complex transformations. In contrast, track aggregation is a very simple operation with a finite number of possibilities. A complete description of these steps could enhance the reproducibility of retrospective studies.
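    The Python sketch below is only an illustration of the track and feature states and of one possible track aggregation as described above; the field names are assumptions and do not correspond to the actual columns of the proposed TRACK and FEATURE tables extending the OMOP CDM.
    ```python
    # Illustrative data structures for the "track" and "feature" states; field
    # names are assumptions, not the columns of the proposed OMOP CDM extension.
    from dataclasses import dataclass
    from datetime import datetime
    from statistics import mean

    @dataclass
    class Track:                      # time-dependent signal or period of interest
        statistical_unit: str         # e.g. a person identifier
        value: float
        start_event: datetime         # first milestone
        end_event: datetime           # second milestone

    @dataclass
    class Feature:                    # time-independent, one value per statistical unit
        statistical_unit: str
        label: str
        value: float

    def aggregate_tracks(tracks: list[Track], label: str) -> list[Feature]:
        """Track aggregation: collapse the time dimension into one value per unit (here, the mean)."""
        by_unit: dict[str, list[float]] = {}
        for t in tracks:
            by_unit.setdefault(t.statistical_unit, []).append(t.value)
        return [Feature(unit, label, mean(values)) for unit, values in by_unit.items()]
    ```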

  • Article type: Journal Article
    We assessed blood pressure (BP) changes during fiscal years (April to March of the following year) 2015-2020 to clarify the effect of the state of emergency due to the coronavirus disease 2019 (COVID-19) pandemic in 2020. We then considered BP in 2019 separately, as the Japanese hypertension guidelines were updated in 2019. The present retrospective cohort study extracted data from 157,510 Japanese individuals aged <75 years (mean age: 50.3 years, men: 67.5%) from the annual health check-up data of the DeSC database. Trends in BP were assessed using a repeated-measures linear mixed model. After adjusting for the month of health check-ups to exclude seasonal BP variation, systolic BP increased linearly during fiscal years 2015-2018. Compared with the value estimated from the 2015-2018 trend, systolic BP was lower by ≤1 mmHg in fiscal year 2019 among the treated participants. Meanwhile, systolic/diastolic BP (95% confidence interval) increased by 2.11 (1.97-2.24)/1.05 (0.96-1.14) mmHg for untreated women (n = 43,292), 1.60 (1.51-1.70)/1.17 (1.11-1.24) mmHg for untreated men (n = 88,479), 1.92 (1.60-2.23)/0.46 (0.25-0.67) mmHg for treated women (n = 7855), and 1.00 (0.79-1.21)/0.39 (0.25-0.53) mmHg for treated men (n = 17,884) in fiscal year 2020. These increases remained after adjustment for the time-dependent covariates of age, body mass index, alcohol consumption, smoking, physical activity, and blood sampling indices. Social change due to the pandemic might have increased BP by approximately 1-2/0.5-1 mmHg. Meanwhile, only a slight decrease in BP was observed immediately after the guideline update in Japan.
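    A minimal sketch of a repeated-measures linear mixed model of this kind, written with statsmodels: the data file, column names, and model terms are hypothetical and far simpler than the fully adjusted model summarised above.
    ```python
    # Hedged sketch: hypothetical data file and column names, and a much
    # simpler specification than the fully adjusted model in the abstract.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("annual_checkup_bp.csv")  # one row per person-year (hypothetical)

    # Fixed effects: fiscal year and check-up month (seasonal adjustment);
    # a random intercept per participant handles the repeated measures.
    model = smf.mixedlm("sbp ~ C(fiscal_year) + C(checkup_month)",
                        data=df, groups=df["person_id"])
    result = model.fit()
    print(result.summary())
    ```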

  • Article type: Journal Article
    Clinic-based headache registries collect data for a wide variety of purposes including delineating disease characteristics, longitudinal natural disease courses, headache management approaches, quality of care, treatment safety and effectiveness, factors that predict treatment response, health care resource utilization, clinician adherence to guidelines, and cost-effectiveness. Registry data are valuable for numerous stakeholders, including individuals with headache disorders and their caregivers, healthcare providers, scientists, healthcare systems, regulatory authorities, pharmaceutical companies, employers, and policymakers. This International Headache Society document may serve as guidance for developing clinic-based headache registries. Use of registry data requires a formal research protocol that includes: 1) research aims; 2) methods for data collection, harmonization, analysis, privacy, and protection; 3) methods for human subject protection; and 4) publication and dissemination plans. Depending upon their objectives, headache registries should include validated headache-specific questionnaires, patient-reported outcome measures, data elements that are used consistently across studies (i.e., "common data elements"), and medical record data. Amongst other data types, registries may be linked to healthcare and pharmacy claims data, biospecimens, and neuroimaging data. Headache diagnoses should be made according to the International Classification of Headache Disorders diagnostic criteria. The data from well-designed headache registries can provide wide-ranging and novel insights into the characteristics, burden, and treatment of headache disorders and ultimately lead to improvements in the management of patients with headache.

  • Article type: Journal Article
    Amyotrophic lateral sclerosis (ALS) is the most common type of motor neuron disease, and its causes remain unclear. The first ALS gene associated with the autosomal dominant form of the disease was SOD1. This gene has a high rate of rare variants, and an appropriate classification is essential for a correct ALS diagnosis. In this study, we re-evaluated the classification of all previously reported SOD1 variants (n = 202) from the ALSoD, Project MinE, and in-house databases by applying the ACMG-AMP criteria to ALS. New bioinformatics analyses, frequency rating, and a thorough search for functional studies were performed. We also proposed adjustments to the strength of the criteria, describing how to apply them to SOD1 variants. Most of the previously reported variants have been reclassified as likely pathogenic or pathogenic based on the modified weight of the PS3 criterion, highlighting how in vivo and in vitro functional studies determine their interpretation and classification. Furthermore, this study reveals the concordance and discordance of annotations between open databases, indicating the need for expert review to adapt the study of variants to a specific disease. Indeed, in complex diseases such as ALS, oligogenic inheritance, the presence of genes that act as risk factors, and reduced penetrance must be considered. Overall, the diagnosis of ALS remains clinical, and improving variant classification could support the use of genetic data as a diagnostic criterion.
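    The ACMG-AMP combining rules referred to above can be written out programmatically; the Python sketch below implements only a small, simplified subset of the published rules and none of the SOD1-specific adjustments proposed in the study, so it is illustrative rather than clinically usable.
    ```python
    # Partial, simplified subset of the ACMG-AMP combining rules; for
    # illustration only, not a complete or clinically usable classifier.
    def classify(criteria: set[str]) -> str:
        pvs = sum(c.startswith("PVS") for c in criteria)  # very strong evidence (PVS1)
        ps = sum(c.startswith("PS") for c in criteria)    # strong evidence (e.g. PS3)
        pm = sum(c.startswith("PM") for c in criteria)    # moderate evidence

        if pvs >= 1 and (ps >= 1 or pm >= 2):
            return "Pathogenic"
        if ps >= 2:
            return "Pathogenic"
        if pvs >= 1 and pm == 1:
            return "Likely pathogenic"
        if ps == 1 and 1 <= pm <= 2:
            return "Likely pathogenic"
        return "Uncertain significance (under this partial rule set)"

    print(classify({"PS3", "PM1", "PM2"}))  # -> Likely pathogenic
    ```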

  • Article type: Journal Article
    Quinoa is a crop originating in the Andes but now grown more widely, with the genetic potential for significant further expansion. Due to the phenotypic plasticity of quinoa, varieties need to be assessed across years and multiple locations. To improve comparability among field trials across the globe and to facilitate collaborations, components of the trials need to be kept consistent, including the type of data collected and the methods used to collect them. Here, an international, open-access framework for phenotyping a wide range of quinoa features is proposed to facilitate the systematic agronomic, physiological and genetic characterization of quinoa for crop adaptation and improvement. Mature plant phenotyping is a central aspect of this paper, including detailed descriptions and the provision of phenotyping cards to facilitate consistency in data collection. High-throughput methods for multi-temporal phenotyping based on remote sensing technologies are described. Tools for higher-throughput post-harvest phenotyping of seeds are presented. A guideline for approaching quinoa field trials is suggested, including the collection of environmental data and the design of statistically robust layouts. To move towards developing resources for quinoa in line with those for major cereal crops, a database was created. The Quinoa Germinate Platform will serve as a central repository of data for quinoa researchers globally.

  • Article type: Journal Article
    Institutional arthroplasty registries are very popular nowadays; however, very few efforts have been made in order to standardize the information to be collected, thus limiting the possibility of inter-institutional data interpretation. This manuscript reports the results of a single-country consensus designed to define the minimum standardized dataset to be recorded within an institutional arthroplasty registry.
    A national consensus was carried out among all members of the Colombian Society of Hip and Knee Surgeons using the Delphi method. Eleven questions and answers comprising every potential domain of an institutional registry of hip and knee arthroplasty were defined. According to the methodology, anonymous voting and multiple discussion rounds were performed. Three levels of agreement were defined: strong consensus, equal to or greater than 80%; weak consensus, between 70% and 79.9%; and no consensus, below 70%.
    All of the questions reached consensus level. The minimum dataset was defined to include demographic and clinical information, intraoperative and implant details, follow-up and early complications, implant survival, and functional outcome scores, as well as the validation model to assess information quality within the database. Currently, this dataset is being implemented voluntarily by the members of our national society.
    A national consensus is a feasible method to build homogeneous arthroplasty registries. We recommend such an exercise since it establishes the basis for comparing and pooling data between institutions and for the joint analysis of that information in a national registry.
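    The agreement thresholds defined in the methodology above reduce to a trivial classification rule; the Python sketch below simply spells it out (the function name is illustrative).
    ```python
    # Agreement thresholds as defined in the Delphi methodology described above.
    def consensus_level(agreement_pct: float) -> str:
        if agreement_pct >= 80:
            return "strong consensus"
        if agreement_pct >= 70:
            return "weak consensus"
        return "no consensus"

    print(consensus_level(83.5))  # -> strong consensus
    print(consensus_level(72.0))  # -> weak consensus
    ```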