automation tools

  • Article type: Journal Article
    OBJECTIVE: This paper describes several automation tools and software that can be considered during evidence synthesis projects and provides guidance for their integration in the conduct of scoping reviews.
    METHODS: The guidance presented in this work is adapted from the results of a scoping review and consultations with the JBI Scoping Review Methodology group.
    RESULTS: This paper describes several reliable, validated automation tools and software that can be used to enhance the conduct of scoping reviews. Developments in the automation of systematic reviews, and more recently scoping reviews, are continuously evolving. We detail several helpful tools in order of the key steps recommended by the JBI's methodological guidance for undertaking scoping reviews, including team establishment, protocol development, searching, de-duplication, screening titles and abstracts, data extraction, data charting, and report writing. While we include several reliable tools and software that can be used for the automation of scoping reviews, there are some limitations to the tools mentioned. For example, some are available in English only, and their lack of integration with other tools results in limited interoperability.
    CONCLUSIONS: This paper highlighted several useful automation tools and software programs to use in undertaking each step of a scoping review. This guidance has the potential to inform collaborative efforts aiming at the development of evidence-informed, integrated automation tools and software packages for enhancing the conduct of high-quality scoping reviews.

  • Article type: Journal Article
    Researchers performing high-quality systematic reviews search across multiple databases to identify relevant evidence. However, the same publication is often retrieved from several databases. Identifying and removing such duplicates ("deduplication") can be extremely time-consuming, but failure to remove these citations can lead to the wrongful inclusion of duplicate data. Many existing tools are not sensitive enough, lack interoperability with other tools, are not freely accessible, or are difficult to use without programming knowledge. Here, we report the performance of our Automated Systematic Search Deduplicator (ASySD), a novel tool to perform automated deduplication of systematic searches for biomedical reviews.
    We evaluated ASySD's performance on 5 unseen biomedical systematic search datasets of various sizes (1845-79,880 citations). We compared the performance of ASySD with EndNote's automated deduplication option and with the Systematic Review Assistant Deduplication Module (SRA-DM).
    ASySD identified more duplicates than either SRA-DM or EndNote, with a sensitivity in different datasets of 0.95 to 0.99. The false-positive rate was comparable to human performance, with a specificity of > 0.99. The tool took less than 1 h to identify and remove duplicates within each dataset.
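    The sensitivity and specificity figures above follow directly from confusion-matrix counts over citation pairs. A minimal sketch (the counts below are illustrative, not the study's data):

    ```python
    def screening_metrics(tp, fp, tn, fn):
        """Sensitivity and specificity for a deduplication run.

        Here a 'positive' is a true duplicate: sensitivity is the share of real
        duplicates caught; specificity is the share of unique records correctly
        left untouched.
        """
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        return sensitivity, specificity

    # Illustrative counts (NOT from the study): 970 duplicates caught, 30 missed,
    # 10 unique records wrongly flagged, 9990 unique records correctly kept.
    sens, spec = screening_metrics(tp=970, fp=10, tn=9990, fn=30)
    print(round(sens, 3), round(spec, 3))  # 0.97 0.999
    ```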
    For duplicate removal in biomedical systematic reviews, ASySD is a highly sensitive, reliable, and time-saving tool. It is open source and freely available online as both an R package and a user-friendly web application.
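    The kind of citation deduplication that ASySD automates can be sketched as pairwise fuzzy matching over normalized record fields. This is an illustrative toy, not ASySD's actual algorithm; the record fields and the 0.9 threshold are assumptions:

    ```python
    from difflib import SequenceMatcher

    def normalize(text):
        """Lowercase and strip punctuation so formatting differences don't block matches."""
        return "".join(c for c in text.lower() if c.isalnum() or c.isspace()).strip()

    def is_duplicate(a, b, title_threshold=0.9):
        """Treat two citation records as duplicates if years agree and titles are near-identical."""
        if a["year"] != b["year"]:
            return False
        ratio = SequenceMatcher(None, normalize(a["title"]), normalize(b["title"])).ratio()
        return ratio >= title_threshold

    def deduplicate(records):
        """Keep the first occurrence of each duplicate cluster (O(n^2) pairwise sketch)."""
        kept = []
        for rec in records:
            if not any(is_duplicate(rec, k) for k in kept):
                kept.append(rec)
        return kept

    records = [
        {"title": "Automated deduplication of systematic searches", "year": 2023},
        {"title": "Automated Deduplication of Systematic Searches.", "year": 2023},
        {"title": "Machine learning for citation screening", "year": 2022},
    ]
    print(len(deduplicate(records)))  # → 2 (the near-identical titles collapse)
    ```

    Real tools refine this with blocking on year or first author to avoid comparing every pair, which is what keeps runs on 79,880-citation datasets tractable.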

  • Article type: Meta-Analysis
    Systematic reviews (SRs) are invaluable evidence syntheses, widely used in biomedicine and other scientific areas. Tremendous resources are being spent on the production and updating of SRs. There is a continuous need to automate the process and to use the workforce and resources to make it faster and more efficient.
    Information gathered by previous EVBRES research was used to construct a questionnaire for round 1, which was partly quantitative and partly qualitative. Fifty-five experienced SR authors were invited to participate in a Delphi study (DS) designed to identify the most promising areas and methods to improve the efficient production and updating of SRs. Topic questions focused on which areas of SRs are the most time-, effort-, and resource-intensive and should be prioritized in further research. Data were analysed using NVivo 12 Plus, Microsoft Excel 2013, and SPSS. Thematic analysis findings were used on the topics on which agreement was not reached in round 1 in order to prepare the questionnaire for round 2.
    Sixty percent (33/55) of the invited participants completed round 1; 44% (24/55) completed round 2. Participants reported an average of 13.3 years of experience in conducting SRs (SD 6.8). More than two thirds of the respondents agreed/strongly agreed that the following topics should be prioritized: extracting data, literature searching, screening abstracts, obtaining and screening full texts, updating SRs, finding previous SRs, translating non-English studies, synthesizing data, project management, writing the protocol, constructing the search strategy, and critically appraising. Participants did not consider the following areas a priority: snowballing, GRADE-ing, writing the SR, deduplication, formulating the SR question, and performing meta-analysis.
    Data extraction was prioritized by the majority of participants as an area that needs more research/methods development. The quality of available language translation tools has increased dramatically over the years (Google Translate, DeepL). A promising new tool for snowballing has emerged (Citation Chaser). Automation cannot substitute for human judgement where complex decisions are needed (GRADE-ing).
    The study protocol was registered at https://osf.io/bp2hu/.

  • Article type: Journal Article
    Currently, law enforcement and legal consultants are heavily utilizing social media platforms to easily access data associated with the perpetrators of illegitimate events. However, accessing this publicly available information for legal use is technically challenging and legally intricate due to heterogeneous and unstructured data and privacy laws, thus generating massive workloads of cognitively demanding cases for investigators. Therefore, it is critical to develop solutions and tools that can assist investigators in their work and decision making. Automating digital forensics is not exclusively a technical problem; the technical issues are always coupled with privacy and legal matters. Here, we introduce a multi-layer automation approach that addresses the automation issues from collection to evidence analysis in online social network forensics. Finally, we propose a set of analysis operators based on domain correlations. These operators can be embedded in software tools to help investigators draw realistic conclusions. The operators are implemented using a Twitter ontology and tested through a case study. This study describes a proof-of-concept approach for forensic automation on online social networks.

  • Article type: Journal Article
    We investigated systematic review automation tool use by systematic reviewers, health technology assessors, and clinical guideline developers.
    An online, 16-question survey was distributed across several evidence synthesis, health technology assessment, and guideline development organizations. We asked the respondents which tools they use and abandon, how often and when they use the tools, their perceived time savings and accuracy, and desired new tools. Descriptive statistics were used to report the results.
    A total of 253 respondents completed the survey; 89% had used systematic review automation tools, most frequently whilst screening (79%). Respondents' "top 3" tools included Covidence (45%), RevMan (35%), and Rayyan and GRADEPro (both 22%); the most commonly abandoned were Rayyan (19%), Covidence (15%), DistillerSR (14%), and RevMan (13%). Tools saved time (80%) and increased accuracy (54%). Respondents taught themselves how to use the tools (72%); lack of knowledge was the most frequent barrier to tool adoption (51%). New tool development was suggested for the searching and data extraction stages.
    Automation tools will likely play an increasingly important role in producing high-quality and timely reviews. Further work is required in the training and dissemination of automation tools and in ensuring that they offer the features desired by those conducting systematic reviews.

  • Article type: Journal Article
    Systematic reviews of the scientific literature can be an important source of information supporting the daily work of regulators in their decision making, particularly in areas of innovative technologies where regulatory experience is still limited. Significant research activity in the field of nanotechnology has produced a huge number of publications over the last few decades. However, even if the published data can provide relevant information, scientific articles are often of diverse quality, and it is nearly impossible to manually process and evaluate such an amount of data in a systematic manner. In this feasibility study, we investigated to what extent open-access automation tools can support a systematic review of the toxic effects of nanomaterials for health applications reported in the scientific literature. In this study, we used a battery of available tools to perform the initial steps of a systematic review, such as targeted searches, data curation, and abstract screening. This work was complemented with an in-house developed tool that allowed us to extract specific sections of the articles, such as the materials and methods part or the results section, in which we could perform subsequent text analysis. We ranked the articles according to quality criteria based on the reported nanomaterial characterisation and extracted the most frequently described toxic effects induced by different types of nanomaterials. Even if further demonstration of the reliability and applicability of automation tools is necessary, this study demonstrated the potential to leverage information from the scientific literature by using automation systems in a tiered strategy.
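    The section-extraction step described above, pulling out the materials and methods or results section of an article for downstream text analysis, can be sketched with a heading-based regex. The heading list and input format are illustrative assumptions, not the study's in-house tool:

    ```python
    import re

    # Recognized section headings, assumed to appear on their own line.
    SECTION_HEADINGS = r"(?:abstract|introduction|materials and methods|methods|results|discussion|conclusions?)"

    def extract_section(text, name):
        """Return the text between the named heading and the next recognized heading."""
        pattern = re.compile(
            rf"^{name}\s*$(.*?)(?=^{SECTION_HEADINGS}\s*$|\Z)",
            re.IGNORECASE | re.MULTILINE | re.DOTALL,
        )
        match = pattern.search(text)
        return match.group(1).strip() if match else None

    article = """Introduction
    Nanomaterials are widely studied.
    Materials and Methods
    Particles were characterised by TEM.
    Results
    Dose-dependent toxicity was observed.
    """
    # The indentation above is only for display; real input would be flush-left.
    article = "\n".join(line.strip() for line in article.splitlines())
    print(extract_section(article, "Materials and Methods"))
    ```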

  • Article type: Letter
    The fourth meeting of the International Collaboration for Automation of Systematic Reviews (ICASR) was held 5-6 November 2019 in The Hague, the Netherlands. ICASR is an interdisciplinary group whose goal is to maximize the use of technology for conducting rapid, accurate, and efficient systematic reviews of scientific evidence. The group seeks to facilitate the development and acceptance of automated techniques for systematic reviews. In 2018, the major themes discussed were the transferability of automation tools (i.e., tools developed for other purposes that might be used by systematic reviewers), the automated recognition of study design in multiple disciplines and applications, and approaches for the evaluation of automation tools.

  • Article type: Journal Article
    Here, we outline a method of applying existing machine learning (ML) approaches to aid citation screening in an ongoing broad and shallow systematic review of preclinical animal studies. The aim is to achieve a high-performing algorithm, comparable to human screening, that can reduce the human resources required for carrying out this step of a systematic review.
    We applied ML approaches to a broad systematic review of animal models of depression at the citation screening stage. We tested two independently developed ML approaches which used different classification models and feature sets. We recorded the performance of the ML approaches on an unseen validation set of papers using sensitivity, specificity and accuracy. We aimed to achieve 95% sensitivity and to maximise specificity. The classification model providing the most accurate predictions was applied to the remaining unseen records in the dataset and will be used in the next stage of the preclinical biomedical sciences systematic review. We used a cross-validation technique to assign ML inclusion likelihood scores to the human screened records, to identify potential errors made during the human screening process (error analysis).
    ML approaches reached 98.7% sensitivity based on learning from a training set of 5749 records, with an inclusion prevalence of 13.2%. The highest level of specificity reached was 86%. Performance was assessed on an independent validation dataset. Human errors in the training and validation sets were successfully identified using the inclusion likelihood assigned by the ML model to highlight discrepancies. Training the ML algorithm on the corrected dataset improved the specificity of the algorithm without compromising sensitivity. Error analysis correction led to a 3% improvement in sensitivity and specificity, which increased the precision and accuracy of the ML algorithm.
    This work has confirmed the performance and application of ML algorithms for screening in systematic reviews of preclinical animal studies. It has highlighted the novel use of ML algorithms to identify human error. This needs to be confirmed in other reviews with different inclusion prevalence levels, but represents a promising approach to integrating human decisions and automation in systematic review methodology.
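    The error-analysis idea above, using the model's inclusion-likelihood scores to flag human screening decisions that deserve re-checking, can be sketched as follows. The thresholds and data are illustrative assumptions, not the study's values:

    ```python
    def flag_potential_errors(decisions, low=0.1, high=0.9):
        """Flag human decisions that disagree strongly with the model.

        Each decision is a tuple (human_included: bool, ml_likelihood: float in [0, 1]).
        A human 'include' with a very low ML likelihood, or a human 'exclude' with a
        very high one, is queued for manual re-checking. Thresholds are assumptions.
        """
        flags = []
        for i, (human_included, likelihood) in enumerate(decisions):
            if human_included and likelihood < low:
                flags.append((i, "human included, model confident exclude"))
            elif not human_included and likelihood > high:
                flags.append((i, "human excluded, model confident include"))
        return flags

    decisions = [(True, 0.97), (False, 0.02), (False, 0.95), (True, 0.04)]
    for idx, reason in flag_potential_errors(decisions):
        print(idx, reason)  # flags records 2 and 3 for re-screening
    ```

    Records that survive re-checking with corrected labels can then be fed back into training, which is the loop the abstract credits with the 3% improvement.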