Annotation

  • Article type: Journal Article
    Enhancing the reproducibility and comprehension of adaptive immune receptor repertoire sequencing (AIRR-seq) data analysis is critical for scientific progress. This study presents guidelines for reproducible AIRR-seq data analysis, and a collection of ready-to-use pipelines with comprehensive documentation. To this end, ten common pipelines were implemented using ViaFoundry, a user-friendly interface for pipeline management and automation. This is accompanied by versioned containers, documentation and archiving capabilities. The automation of pre-processing analysis steps and the ability to modify pipeline parameters according to specific research needs are emphasized. AIRR-seq data analysis is highly sensitive to varying parameters and setups; using the guidelines presented here, the ability to reproduce previously published results is demonstrated. This work promotes transparency, reproducibility, and collaboration in AIRR-seq data analysis, serving as a model for handling and documenting bioinformatics pipelines in other research domains.
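    As an illustration of the reproducibility practices described above (pinned container versions, adjustable parameters, and archived run records), a minimal Python sketch of a containerized pipeline step follows; the image tag, step name, and parameters are hypothetical placeholders and are not taken from the published pipelines.

```python
"""Minimal sketch of a reproducible, containerized pipeline step.

Illustrative only: the container tag, step name, and parameters are
hypothetical placeholders, not the published AIRR-seq pipelines.
"""
import json
import subprocess
from datetime import datetime, timezone
from pathlib import Path


def run_step(step_name, image, command, params, workdir, log_dir="pipeline_logs"):
    """Run one pipeline step in a pinned container and archive its settings."""
    args = [f"--{key}={value}" for key, value in params.items()]
    cmd = [
        "docker", "run", "--rm",
        "-v", f"{Path(workdir).resolve()}:/data",
        image,                      # versioned tag, e.g. "example/tool:1.2.3"
        *command, *args,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)

    # Archive exactly what was run, so the step can be reproduced later.
    Path(log_dir).mkdir(exist_ok=True)
    record = {
        "step": step_name,
        "image": image,
        "command": cmd,
        "parameters": params,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "returncode": result.returncode,
    }
    (Path(log_dir) / f"{step_name}.json").write_text(json.dumps(record, indent=2))
    return result


# Hypothetical usage: a quality-filtering pre-processing step whose parameters
# can be adjusted per study and are archived alongside the run record.
# run_step("filter_quality", "example/airr-tools:1.0", ["filter-reads"],
#          {"min_quality": 20}, workdir="reads/")
```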

  • Article type: Journal Article
    We propose a novel three-stage FIND-RESOLVE-LABEL workflow for crowdsourced annotation to reduce ambiguity in task instructions and, thus, improve annotation quality. Stage 1 (FIND) asks the crowd to find examples whose correct label seems ambiguous given task instructions. Workers are also asked to provide a short tag that describes the ambiguous concept embodied by the specific instance found. We compare collaborative vs. non-collaborative designs for this stage. In Stage 2 (RESOLVE), the requester selects one or more of these ambiguous examples to label (resolving ambiguity). The new label(s) are automatically injected back into task instructions in order to improve clarity. Finally, in Stage 3 (LABEL), workers perform the actual annotation using the revised guidelines with clarifying examples. We compare three designs using these examples: examples only, tags only, or both. We report image labeling experiments over six task designs using Amazon's Mechanical Turk. Results show improved annotation accuracy and further insights regarding effective design for crowdsourced annotation tasks.
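    A minimal Python sketch of the FIND-RESOLVE-LABEL data flow described above is shown below; the class and field names are illustrative assumptions, not the authors' implementation.

```python
"""Sketch of the FIND-RESOLVE-LABEL data flow described in the abstract.
The class and field names are illustrative assumptions, not the authors' code."""
from dataclasses import dataclass, field


@dataclass
class AmbiguousExample:
    item_id: str
    tag: str                            # short worker-provided tag from FIND
    resolved_label: str | None = None   # filled in by the requester in RESOLVE


@dataclass
class TaskInstructions:
    text: str
    clarifications: list[AmbiguousExample] = field(default_factory=list)

    def inject(self, example: AmbiguousExample) -> None:
        """RESOLVE: fold a requester-labeled example back into the instructions."""
        self.clarifications.append(example)

    def render(self, show_examples: bool = True, show_tags: bool = True) -> str:
        """LABEL: build the revised guidelines (examples only, tags only, or both)."""
        lines = [self.text]
        for ex in self.clarifications:
            parts = []
            if show_examples:
                parts.append(f"item {ex.item_id} -> {ex.resolved_label}")
            if show_tags:
                parts.append(f"({ex.tag})")
            lines.append("Clarification: " + " ".join(parts))
        return "\n".join(lines)


# FIND: workers flag an ambiguous item and tag the ambiguity (hypothetical data).
found = AmbiguousExample(item_id="img_042", tag="occluded object")
# RESOLVE: the requester assigns the correct label.
found.resolved_label = "dog"
instructions = TaskInstructions(text="Label each image as 'dog' or 'not dog'.")
instructions.inject(found)
# LABEL: workers annotate using the revised guidelines.
print(instructions.render(show_examples=True, show_tags=True))
```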

  • Article type: Journal Article
    The growing interest in analysis of surgical video through machine learning has led to increased research efforts; however, common methods of annotating video data are lacking. There is a need to establish recommendations on the annotation of surgical video data to enable assessment of algorithms and multi-institutional collaboration.
    Four working groups were formed from a pool of participants that included clinicians, engineers, and data scientists. The working groups were focused on four themes: (1) temporal models, (2) actions and tasks, (3) tissue characteristics and general anatomy, and (4) software and data structure. A modified Delphi process was utilized to create a consensus survey based on suggested recommendations from each of the working groups.
    After three Delphi rounds, consensus was reached on recommendations for annotation within each of these domains. A hierarchy for annotation of temporal events in surgery was established.
    While additional work remains to achieve accepted standards for video annotation in surgery, the consensus recommendations on a general framework for annotation presented here lay the foundation for standardization. This type of framework is critical to enabling diverse datasets, performance benchmarks, and collaboration.
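    The kind of hierarchical temporal annotation the recommendations call for can be sketched as nested, time-bounded events; the level names and labels below are illustrative assumptions, since the consensus hierarchy itself is defined in the paper.

```python
"""Illustrative sketch of a hierarchical temporal annotation for surgical video.
The level names (phase/step/action) and labels are assumptions for illustration."""
from dataclasses import dataclass, field


@dataclass
class TemporalEvent:
    label: str                # e.g. "phase: access" (hypothetical label)
    start_s: float            # start time in seconds from video start
    end_s: float              # end time in seconds
    children: list["TemporalEvent"] = field(default_factory=list)

    def add(self, child: "TemporalEvent") -> "TemporalEvent":
        """Nest a finer-grained event (e.g. an action inside a step)."""
        assert self.start_s <= child.start_s <= child.end_s <= self.end_s, \
            "child events must lie within the parent interval"
        self.children.append(child)
        return child


# Hypothetical three-level annotation: phase -> step -> action.
phase = TemporalEvent("phase: access", 0.0, 300.0)
step = phase.add(TemporalEvent("step: port placement", 10.0, 120.0))
step.add(TemporalEvent("action: insert trocar", 30.0, 45.0))
```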

  • Article type: Journal Article
    The pupillary light reflex (PLR) is a routinely utilized clinical test to quickly assess integrity of subcortical light perception pathways in patients. While interpretation is simple for ophthalmologists, interestingly, discrepancy occurs in annotation of the test results, especially for the consensual response. An email survey sent to diplomates of either the American or European Colleges of Veterinary Ophthalmologists (ACVO and ECVO, respectively), requesting use of a 'direct/consensual' annotation convention, showed 58% of respondents preferred one convention while 39% preferred a different convention. The majority preferred convention was different between ACVO and ECVO respondents. Standardization of PLR annotation convention across specialists is recommended for clarity in medical record keeping and communication among colleagues.

  • Article type: Journal Article
    MicroRNA regulation of developmental and cellular processes is a relatively new field of study, and the available research data have not been organized to enable its inclusion in pathway and network analysis tools. The association of gene products with terms from the Gene Ontology is an effective method to analyze functional data, but until recently there has been no substantial effort dedicated to applying Gene Ontology terms to microRNAs. Consequently, when performing functional analysis of microRNA data sets, researchers have had to rely instead on the functional annotations associated with the genes encoding microRNA targets. In consultation with experts in the field of microRNA research, we have created comprehensive recommendations for the Gene Ontology curation of microRNAs. This curation manual will enable provision of a high-quality, reliable set of functional annotations for the advancement of microRNA research. Here we describe the key aspects of the work, including development of the Gene Ontology to represent this data, standards for describing the data, and guidelines to support curators making these annotations. The full microRNA curation guidelines are available on the GO Consortium wiki (http://wiki.geneontology.org/index.php/MicroRNA_GO_annotation_manual).
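    The sort of association the curation guidelines standardize can be sketched as a minimal annotation record; only a subset of the fields used in GO association (GAF) files is shown, and the example values are illustrative rather than curated annotations.

```python
"""Simplified sketch of a Gene Ontology annotation record for a microRNA.
Only a subset of the fields used in GO association (GAF) files is shown,
and the example values are illustrative, not curated annotations."""
from dataclasses import dataclass


@dataclass
class GoAnnotation:
    db_object_id: str      # identifier of the gene product (here, a miRNA)
    db_object_symbol: str
    go_id: str             # GO term identifier
    evidence_code: str     # e.g. "IMP", "IDA"
    aspect: str            # "P" (process), "F" (function), or "C" (component)
    reference: str         # supporting publication, e.g. a PubMed ID


# Illustrative (not curated) example of the kind of association the
# guidelines standardize: a miRNA annotated to a biological process term.
example = GoAnnotation(
    db_object_id="MIRNA:example-mir-1",
    db_object_symbol="mir-1",
    go_id="GO:0010629",            # negative regulation of gene expression
    evidence_code="IMP",
    aspect="P",
    reference="PMID:00000000",     # placeholder reference
)
```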