PubMed

  • Article type: Journal Article
    BACKGROUND: Interrupted time series (ITS) studies contribute importantly to systematic reviews of population-level interventions. We aimed to develop and validate search filters to retrieve ITS studies in MEDLINE and PubMed.
    METHODS: A total of 1017 known ITS studies (published 2013-2017) were analysed using text mining to generate candidate terms. A control set of 1398 time-series studies was used to select differentiating terms. Various combinations of candidate terms were iteratively tested to generate three search filters. An independent set of 700 ITS studies was used to validate the filters' sensitivities. The filters were test-run in Ovid MEDLINE and the records randomly screened for ITS studies to determine their precision. Finally, all MEDLINE filters were translated to PubMed format and their sensitivities in PubMed were estimated.
    RESULTS: Three search filters were created in MEDLINE: a precision-maximising filter with high precision (78%; 95% CI 74%-82%) but moderate sensitivity (63%; 59%-66%), most appropriate when there are limited resources to screen studies; a sensitivity-and-precision-maximising filter with higher sensitivity (81%; 77%-83%) but lower precision (32%; 28%-36%), providing a balance between expediency and comprehensiveness; and a sensitivity-maximising filter with high sensitivity (88%; 85%-90%) but likely very low precision, useful when combined with specific content terms. Similar sensitivity estimates were found for PubMed versions.
    CONCLUSIONS: Our filters strike different balances between comprehensiveness and screening workload and suit different research needs. Retrieval of ITS studies would be improved if authors identified the ITS design in the titles.
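The sensitivity and precision figures above are simple proportions with 95% confidence intervals; a minimal sketch of the arithmetic, using a normal-approximation interval (the counts below are hypothetical, chosen only to mirror the reported 63% sensitivity and 78% precision):

```python
from math import sqrt

def proportion_ci(successes: int, total: int, z: float = 1.96):
    """Point estimate and normal-approximation 95% CI for a proportion."""
    p = successes / total
    half = z * sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical counts: a filter retrieves 441 of 700 known ITS studies
# (sensitivity), and 312 of 400 screened records are true ITS studies
# (precision).
sensitivity = proportion_ci(441, 700)
precision = proportion_ci(312, 400)
```

For studies of this size the normal approximation is adequate; with small counts a Wilson interval would be preferable.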

  • Article type: Journal Article
    BACKGROUND: Continual evidence surveillance is an integral feature of living guidelines. The Australian Stroke Guidelines include recommendations on 100 clinical topics and have been 'living' since 2018.
    OBJECTIVE: To describe the approach for establishing and evaluating an evidence surveillance system for the living Australian Stroke Guidelines.
    METHODS: We developed a pragmatic surveillance system based on an analysis of the searches for the 2017 Stroke Guidelines and evaluated its reliability by assessing the potential impact on guideline recommendations. Search retrieval and screening workload are monitored monthly, together with the frequency of changes to the guideline recommendations.
    RESULTS: Evidence surveillance was guided by practical considerations of efficiency and sustainability. A single PubMed search covering all guideline topics, limited to systematic reviews and randomised trials, is run monthly. The search retrieves about 400 records a month of which a sixth are triaged to the guideline panels for further consideration. Evaluations with Epistemonikos and the Cochrane Stroke Trials Register demonstrated the robustness of adopting this more restrictive approach. Collaborating with the guideline team in designing, implementing and evaluating the surveillance is essential for optimising the approach.
    CONCLUSIONS: Monthly evidence surveillance for a large living guideline is feasible and sustainable when applying a pragmatic approach.
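A single monthly PubMed search limited to systematic reviews and randomised trials, as described above, could be scripted against NCBI's E-utilities; a sketch assuming the standard `esearch` endpoint and PubMed's `[pt]`, `[dp]`, and `systematic[sb]` tags (the actual Stroke Guidelines search strategy is not given here, and the topic term below is a placeholder):

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def monthly_search_url(topic_query: str, start: str, end: str) -> str:
    """Build an esearch URL restricted to systematic reviews and
    randomised trials within a publication-date window."""
    term = (f"({topic_query}) AND "
            f"(systematic[sb] OR randomized controlled trial[pt]) AND "
            f"({start}:{end}[dp])")
    return EUTILS + "?" + urlencode({"db": "pubmed", "term": term,
                                     "retmax": 500})

url = monthly_search_url("stroke", "2024/01/01", "2024/01/31")
```

Scheduling this once a month and triaging the retrieved IDs would reproduce the workflow's shape, though not its tuned query.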

  • Article type: Systematic Review
    OBJECTIVE: To present strategies for managing tumor mass formation and their corresponding postoperative outcomes.
    METHODS: We conducted a systematic literature review following the guidelines and protocol of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). We searched the PubMed and EMBASE databases, screened titles and abstracts, and further evaluated full-text publications to select relevant studies. Additionally, a narrative review of other pertinent articles on PubMed was performed. Case reports, cohort studies, and clinical trials were included. Animal studies were excluded.
    RESULTS: Of 6 patients enrolled in this study, most had American Spinal Injury Association Impairment Scale grade A (66.7%) following intramedullary injury, and 1 patient had American Spinal Injury Association Impairment Scale grade D (16.65%). The discovery time of the intramedullary mass formation ranged from approximately 5 to 14 years. Surgical intervention was performed in most cases (66.7%), with improvement reported in 3 of the surgical cases (75%). The majority of cases (83.3%) involved cervical lesions, while only 1 case (16.7%) involved a thoracic lesion.
    CONCLUSIONS: Due to the scarcity of described cases, there is no specific treatment for this tumor. Although our patient remained stable after conservative treatment, other studies have shown improvement in symptoms after mass resection. It is essential that the management of this complication be researched further due to the variety of clinical characteristics presented.

  • Article type: Journal Article
    This paper presents an integrated and easy methodology for bibliometric analysis. The proposed methodology is evaluated on recent research activities to highlight the role of the Internet of Things in healthcare applications. Different tools are used for bibliometric studies to explore the breadth and depth of different research areas. However, these methods consider only Web of Science or Scopus data for bibliometric analysis. Furthermore, bibliometric analysis has not been fully utilised to examine the capabilities of the Internet of Things for medical devices and their applications. There is a need for an easy-to-use methodology for a single integrated analysis of data from many sources rather than just Web of Science or Scopus. A few bibliometric studies merge Web of Science and Scopus to conduct a single integrated piece of research. This paper presents a methodology that could be used for a single bibliometric analysis across multiple databases. Three freely available tools, Excel, Publish or Perish, and the R package Bibliometrix, are used for this purpose. The proposed bibliometric methodology is evaluated for studies related to the Internet of Medical Things (IoMT) and its applications in healthcare settings. An inclusion/exclusion criterion is developed to explore relevant studies from the seven largest databases, including Scopus, Web of Science, IEEE, ACM Digital Library, PubMed, ScienceDirect and Google Scholar. The study focuses on factors such as the number of publications, citations per paper, collaborative research output, h-index, primary research and healthcare application areas. Data for this study are collected from the seven largest academic databases for 2012 to 2022 related to IoMT and its applications in healthcare. The bibliometric data analysis generated different research themes within IoMT technologies and their applications in healthcare research. The study has also identified significant research areas in this field. The leading research countries and their contributions are another output from the data analysis. Finally, future research directions are proposed for researchers to explore this area in further detail.
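Of the indicators listed above, the h-index is the most algorithmic: the largest h such that h papers each have at least h citations. A minimal reference implementation:

```python
def h_index(citations: list[int]) -> int:
    """h-index: the largest h such that h papers have
    at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i       # the i-th ranked paper still has >= i citations
        else:
            break
    return h
```

For example, citation counts [10, 8, 5, 4, 3] yield an h-index of 4: four papers have at least four citations each, but not five papers with five.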

  • Article type: Journal Article
    The studies on bibliometric analyses of case reports usually give valuable information regarding various aspects of case reports but lack investigation analysing these publications. This is the first-ever study to examine the bibliometric articles on case reports; hence, it is hypothesized to provide a valuable contribution to this gap. PubMed and SCOPUS databases were searched, and a total of 119 articles were obtained, but only five matching the inclusion criteria were analyzed. The keywords involved in the search were "Bibliometrics", "analysis", "case reports", "case series", and "articles", while the time range in which the case reports were searched for was 2011-2021. Common parameters from these five articles were employed for bibliometric analysis, which included publication year, publication type, the number of case reports per article, theme or subject of the article, citation, and impact factor (IF). Out of the five articles identified, four were published in 2021. One out of five was a case report, and the rest were review-type articles. The overall citation number of these articles was less than 10, and the IF of these articles was between 0 and 0.007. The citations to these articles accrued over a period of one to two years or less. A comprehensive overview was acquired of the parameters, as well as the recent trends, used to conduct bibliometric analyses of case reports.

  • Article type: Journal Article
    Text mining has been shown to be an auxiliary but key driver for modeling, data harmonization, and interpretation in biomedicine. Scientific literature holds a wealth of information, embodies cumulative knowledge, and remains the core basis on which mechanistic pathways, molecular databases, and models are built and refined. Text mining provides the necessary tools to automatically harness the potential of text. In this study, we show the potential of large-scale text mining for deriving novel insights, with a focus on the growing field of microbiome research. We first collected the complete set of abstracts relevant to the microbiome from PubMed and used our text mining and intelligence platform Taxila for analysis. We demonstrate the usefulness of text mining using two case studies. First, we analyze the geographical distribution of research and study locations in the microbiome field by extracting geographic mentions from text. Using this analysis, we were able to draw useful insights on the state of microbiome research with respect to geographical distribution and economic drivers. Next, to understand the relationships between diseases, the microbiome, and food, which are central to the field, we construct semantic relationship networks between these concepts. We show how such networks can be used to derive useful insights with no prior knowledge encoded.
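At their simplest, semantic relationship networks like those described above can be approximated by abstract-level co-occurrence counts between concept terms. A toy sketch (the abstracts and term lists below are invented for illustration; Taxila's actual pipeline is not described here):

```python
from itertools import combinations
from collections import Counter

def cooccurrence_edges(abstracts, terms):
    """Count how often each pair of concept terms appears together
    in the same abstract; the counts are weighted network edges."""
    edges = Counter()
    for text in abstracts:
        low = text.lower()
        present = [t for t in terms if t in low]
        for a, b in combinations(sorted(present), 2):
            edges[(a, b)] += 1
    return edges

docs = [
    "Gut microbiome composition and obesity in a fiber-rich diet cohort.",
    "Dietary fiber alters the gut microbiome.",
    "Obesity and the microbiome: a review.",
]
edges = cooccurrence_edges(docs, ["microbiome", "obesity", "fiber"])
```

Substring matching is a deliberate simplification; a real pipeline would use entity normalization rather than raw string containment.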

  • Article type: Journal Article
    Meta-analyses aggregate results of different clinical studies to assess the effectiveness of a treatment. Despite their importance, meta-analyses are time-consuming and labor-intensive as they involve reading hundreds of research articles and extracting data. The number of research articles is increasing rapidly and most meta-analyses are outdated shortly after publication as new evidence has not been included. Automatic extraction of data from research articles can expedite the meta-analysis process and allow for automatic updates when new results become available. In this study, we propose a system for automatically extracting data from research abstracts and performing statistical analysis.
    Our corpus consists of 1011 PubMed abstracts of breast cancer randomized controlled trials annotated with the core elements of clinical trials: Participants, Intervention, Control, and Outcomes (PICO). We propose a BERT-based named entity recognition (NER) model to identify PICO information from research abstracts. After extracting the PICO information, we parse numeric outcomes to identify the number of patients having certain outcomes for statistical analysis.
    The NER model extracted PICO elements with relatively high accuracy, achieving F1-scores greater than 0.80 for most entities. We assessed the performance of the proposed system by reproducing the results of an existing meta-analysis. The data extraction step achieved high accuracy; however, the statistical analysis step performed poorly because abstracts sometimes lack all the required information.
    We proposed a system for automatically extracting data from research abstracts and performing statistical analysis. We evaluated the performance of the system by reproducing an existing meta-analysis and the system achieved a relatively good performance, though more substantiation is required.
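The numeric-outcome parsing step described above can be illustrated with a simplified regex over "x of y patients" phrases; the pattern and risk-ratio arithmetic below are illustrative, not the authors' implementation:

```python
import re

# Matches phrases like "12 of 50 patients"
OUTCOME = re.compile(r"(\d+)\s+of\s+(\d+)\s+patients")

def risk_ratio(treatment_sentence: str, control_sentence: str) -> float:
    """Parse event counts from two outcome sentences and compute
    the risk ratio (treatment risk / control risk)."""
    e1, n1 = map(int, OUTCOME.search(treatment_sentence).groups())
    e2, n2 = map(int, OUTCOME.search(control_sentence).groups())
    return (e1 / n1) / (e2 / n2)

rr = risk_ratio("12 of 50 patients in the intervention arm relapsed",
                "6 of 48 patients in the control arm relapsed")
```

Real abstracts report outcomes in far more varied phrasings, which is precisely why the statistical step in the study above struggled when information was missing.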

  • DOI:
    Article type: Journal Article
    Many clinical natural language processing methods rely on non-contextual word embedding (NCWE) or contextual word embedding (CWE) models. Yet, few, if any, intrinsic evaluation benchmarks exist comparing embedding representations against clinician judgment. We developed intrinsic evaluation tasks for embedding models using a corpus of radiology reports: term pair similarity for NCWEs and cloze task accuracy for CWEs. Using surveys, we quantified the agreement between clinician judgment and embedding model representations. We compare embedding models trained on a custom radiology report corpus (RRC), a general corpus, and PubMed and MIMIC-III corpora (P&MC). Cloze task accuracy was equivalent for RRC and P&MC models. For term pair similarity, P&MC-trained NCWEs outperformed all other NCWE models (Spearman ρ 0.61 vs. 0.27-0.44). Among models trained on RRC, fastText models often outperformed other NCWE models and spherical embeddings provided overly optimistic representations of term pair similarity.
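The term-pair similarity comparison above hinges on Spearman's ρ between clinician ratings and embedding similarities. A dependency-free sketch of the statistic, computed as Pearson correlation on average ranks:

```python
def rank(values):
    """Average 1-based ranks; tied values share the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

In practice one would pass clinician similarity ratings as `x` and embedding cosine similarities as `y` for the same term pairs.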

  • Article type: Case Reports
    Quantify tradeoffs in performance, reproducibility, and resource demands across several strategies for developing clinically relevant word embeddings.
    We trained separate embeddings on all full-text manuscripts in the Pubmed Central (PMC) Open Access subset, case reports therein, the English Wikipedia corpus, the Medical Information Mart for Intensive Care (MIMIC) III dataset, and all notes in the University of Pennsylvania Health System (UPHS) electronic health record. We tested embeddings in six clinically relevant tasks including mortality prediction and de-identification, and assessed performance using the scaled Brier score (SBS) and the proportion of notes successfully de-identified, respectively.
    Embeddings from UPHS notes best predicted mortality (SBS 0.30, 95% CI 0.15 to 0.45) while Wikipedia embeddings performed worst (SBS 0.12, 95% CI -0.05 to 0.28). Wikipedia embeddings most consistently (78% of notes) and the full PMC corpus embeddings least consistently (48%) de-identified notes. Across all six tasks, the full PMC corpus demonstrated the most consistent performance, and the Wikipedia corpus the least. Corpus size ranged from 49 million tokens (PMC case reports) to 10 billion (UPHS).
    Embeddings trained on published case reports performed at least as well as embeddings trained on other corpora in most tasks, and clinical corpora consistently outperformed non-clinical corpora. No single corpus produced a strictly dominant set of embeddings across all tasks, so the optimal training corpus depends on the intended use.
    Embeddings trained on published case reports performed comparably on most clinical tasks to embeddings trained on larger corpora. Open access corpora allow training of clinically relevant, effective, and reproducible embeddings.
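The scaled Brier score (SBS) used above rescales the Brier score against a reference model that always predicts the event base rate, so 0 means no better than the base rate and 1 means perfect. A minimal sketch (the labels and probabilities below are made up):

```python
def scaled_brier(y_true, y_prob):
    """Scaled Brier score: 1 - Brier / Brier_ref, where the
    reference model always predicts the observed base rate."""
    n = len(y_true)
    brier = sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / n
    base = sum(y_true) / n
    brier_ref = sum((base - y) ** 2 for y in y_true) / n
    return 1 - brier / brier_ref

sbs = scaled_brier([1, 0, 1, 0], [0.8, 0.2, 0.7, 0.3])
```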

  • Article type: Journal Article
    A growing volume of studies address methods for performing systematic reviews of qualitative studies. One such methodological aspect is the conceptual framework used to structure the review question and plan the search strategy for locating relevant studies. The purpose of this case study was to evaluate the retrieval potential of each element of conceptual frameworks in qualitative systematic reviews in the health sciences.
    The presence of elements from conceptual frameworks in publication titles, abstracts, and controlled vocabulary in CINAHL and PubMed was analyzed using a set of qualitative reviews and their included studies as a gold standard. Using a sample of 101 publications, we determined whether particular publications could be retrieved if a specific element from the conceptual framework was used in the search strategy.
    We found that the relative recall of conceptual framework elements varied considerably, with higher recall for patient/population (99%) and research type (97%) and lower recall for intervention/phenomenon of interest (74%), outcome (79%), and context (61%).
    The use of patient/population and research type elements had high relative recall for qualitative studies. However, other elements should be used with great care due to lower relative recall.
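Relative recall, as used above, is simply the fraction of gold-standard studies retrieved when a given framework element is included in the search strategy. A minimal sketch with invented study IDs:

```python
def relative_recall(retrieved_ids, gold_ids):
    """Fraction of gold-standard studies present in the retrieved set."""
    gold = set(gold_ids)
    found = gold & set(retrieved_ids)
    return len(found) / len(gold)

# Hypothetical: a search element retrieves 3 of 5 gold-standard studies.
recall = relative_recall({"s1", "s2", "s3"}, {"s1", "s2", "s3", "s4", "s5"})
```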
