R

  • Article type: Journal Article
    Healthcare-associated COVID-19 among vulnerable patients leads to disproportionate morbidity and mortality. Early pharmacologic intervention may reduce negative sequelae and improve survival in such settings. This study aimed to describe the outcomes of patients with healthcare-associated COVID-19 who received early short-course remdesivir therapy. We reviewed the characteristics and outcomes of hospitalized patients who developed COVID-19 during an outbreak that involved two wards at a non-acute care hospital in Japan and received short-course remdesivir. Forty-nine patients were diagnosed with COVID-19, 34 on a comprehensive inpatient rehabilitation ward and 15 on a combined palliative care and internal medicine ward. Forty-seven were symptomatic and 46 of them received remdesivir. The median age was 75, and the median Charlson comorbidity index was 6 among those who received it. Forty-one patients had received one or two doses of mRNA vaccines, while none had received a third dose. Most patients received 3 days of remdesivir. Of the patients followed up to 14 and 28 days from onset, 41/44 (95.3%) and 35/41 (85.4%) were alive, respectively. Six deaths occurred by 28 days in the palliative care/internal medicine ward, and two of them were possibly related to COVID-19. Among those who survived, the performance status was unchanged between the time of onset and 28 days.

  • Article type: Journal Article
    Shared resource laboratories (SRLs) offer instrumentation, training, and support to investigators and play an important role in the progress and development of science. To facilitate daily tasks and to provide an effective service, we have made use of computer scripts (lists of computer commands that are processed sequentially) to automate tasks in our flow cytometry facility. Using Python and an application programming interface (API), we automate user communication and produce a daily schedule display screen. We exploit the accessible nature of open standards to use R and Python to analyze and back up data from the BD Influx cell sorter. Finally, we show that through simple scripting, we can add value to an existing service by producing sort statistics from the Beckman Coulter XDP cell sorter. With these five examples, we demonstrate and wish to inspire other SRLs that the use of scripts helps to improve work efficiency, can solve problems, and can enhance the service provided by the SRL. © 2019 The Authors. Cytometry Part A published by Wiley Periodicals, Inc. on behalf of International Society for Advancement of Cytometry.
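    The scripts themselves are not included in the abstract; a minimal sketch of the kind of R backup routine it describes might look like the following. The directory paths and the .fcs file pattern are assumptions for illustration, not the authors' code.

      # Minimal sketch (assumed paths and file pattern): copy new .fcs files from
      # the sorter workstation to a backup share and log what was copied.
      source_dir <- "D:/BDData"              # hypothetical acquisition folder
      backup_dir <- "//server/srl_backup"    # hypothetical network share

      fcs_files <- list.files(source_dir, pattern = "\\.fcs$",
                              recursive = TRUE, full.names = TRUE)

      # Keep only files that are not already present in the backup location
      new_files <- fcs_files[!file.exists(file.path(backup_dir, basename(fcs_files)))]
      copied <- file.copy(new_files, backup_dir, copy.date = TRUE)

      # Append a simple log entry for traceability
      log_line <- sprintf("%s: copied %d of %d new files",
                          format(Sys.time()), sum(copied), length(new_files))
      write(log_line, file = file.path(backup_dir, "backup_log.txt"), append = TRUE)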

  • Article type: Journal Article
    The MultiSCED web application has been developed to assist applied researchers in the behavioral sciences in applying multilevel modeling to quantitatively summarize single-case experimental design (SCED) studies through a user-friendly point-and-click interface embedded within R. In this paper, we offer a brief introduction to the application, explaining how to define and estimate the relevant multilevel models and how to interpret the results numerically and graphically. The use of the application is illustrated through a re-analysis of an existing meta-analytic dataset. By guiding applied researchers through MultiSCED, we aim to make the use of the multilevel modeling technique for combining SCED data across cases and across studies more comprehensible and accessible.
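    MultiSCED is a point-and-click front end, but the class of model it fits can be written directly in R with lme4. The sketch below is illustrative only: the data frame sced_data and its columns (score, phase, case) are simulated stand-ins, not the application's own interface.

      # Toy long-format SCED data (illustrative only): 8 cases, repeated
      # measurements in a baseline ("A") and a treatment ("B") phase.
      library(lme4)
      set.seed(1)
      sced_data <- expand.grid(case = factor(1:8), session = 1:10)
      sced_data$phase <- ifelse(sced_data$session <= 5, "A", "B")
      sced_data$score <- 10 + 3 * (sced_data$phase == "B") +
        rnorm(8, sd = 2)[sced_data$case] + rnorm(nrow(sced_data))

      # Two-level model: the treatment effect (phase) varies across cases
      m2 <- lmer(score ~ phase + (1 + phase | case), data = sced_data)
      summary(m2)   # fixed effects = average baseline level and average treatment effect
      ranef(m2)     # case-specific deviations from those averages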

  • Article type: Journal Article
    Data heterogeneity is a common phenomenon related to the secondary use of electronic health records (EHR) data from different sources. The Observational Health Data Sciences and Informatics (OHDSI) Common Data Model (CDM) organizes healthcare data into standard data structures using concepts that are explicitly and formally specified through standard vocabularies, thereby facilitating large-scale analysis. The objective of this study is to design, develop, and evaluate generic survival analysis routines built using the OHDSI CDM.
    We used intrahepatic cholangiocarcinoma (ICC) patient data to implement CDM-based survival analysis methods. Our methods comprise the following modules: 1) Mapping local terms to standard OHDSI concepts: variables and values related to demographic characteristics, medical history, smoking status, laboratory results, and tumor features were mapped to standard OHDSI concepts through manual analysis; 2) Loading patient data into the CDM using the concept mappings; 3) Developing an R interface that supports portable survival analysis on top of the OHDSI CDM, and comparing the CDM-based analysis results with those obtained using traditional statistical analysis methods.
    Our dataset contained 346 patients diagnosed with ICC. The collected clinical data contain 115 variables, of which 75 were mapped to OHDSI concepts. These concepts mainly belong to four domains: condition, observation, measurement, and procedure. The corresponding standard concepts are scattered across six vocabularies: ICD10CM, ICD10PCS, SNOMED, LOINC, NDFRT, and READ. We loaded a total of 25,950 patient data records into the OHDSI CDM database. However, 40 variables failed to map to the OHDSI CDM, as they mostly belong to imaging and pathology data.
    Our study demonstrates that conducting survival analysis using the OHDSI CDM is feasible and can produce reusable analysis routines. However, challenges to be overcome include 1) semantic loss caused by inaccurate mapping and value normalization; 2) incomplete OHDSI vocabularies describing imaging data, pathological data, and modular data representation.
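    The paper's R interface is not reproduced in the abstract; a minimal sketch of the survival-analysis step, assuming the cohort has already been extracted from the CDM into an analysis-ready data frame (the columns and toy values below are illustrative), could look like this:

      library(survival)

      # icc_cohort stands in for an analysis-ready extract from the OHDSI CDM
      # (e.g. pulled with DatabaseConnector/SqlRender); columns and values are toy.
      set.seed(42)
      icc_cohort <- data.frame(
        time      = rexp(346, rate = 1 / 24),            # follow-up time in months
        status    = rbinom(346, 1, 0.6),                 # 1 = death observed
        age_group = sample(c("<60", ">=60"), 346, replace = TRUE)
      )

      # Kaplan-Meier estimates stratified by age group
      km <- survfit(Surv(time, status) ~ age_group, data = icc_cohort)
      summary(km, times = c(12, 24, 36))

      # Cox proportional hazards model for the same covariate
      cox <- coxph(Surv(time, status) ~ age_group, data = icc_cohort)
      summary(cox)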

  • Article type: Journal Article
    Uncertainty analysis is an important prerequisite for model application. However, existing phosphorus (P) loss indexes or indicators have rarely been evaluated. This study applied the generalized likelihood uncertainty estimation (GLUE) method to assess the uncertainty in the parameters and modeling outputs of a non-point source (NPS) P indicator constructed in the R language. The influences of the subjective choices of likelihood formulation and GLUE acceptability threshold on model outputs were also examined. The results indicated the following. (1) The parameters RegR2, RegSDR2, PlossDPfer, PlossDPman, DPDR, and DPR were highly sensitive to the overall TP simulation, and their value ranges could be reduced by GLUE. (2) The Nash efficiency likelihood (L1) appeared better able to accentuate high-likelihood simulations than the exponential function (L2). (3) A combined likelihood integrating the criteria of multiple outputs performed better than a single likelihood in model uncertainty assessment, in terms of reducing the uncertainty band widths and ensuring the goodness of fit of the overall model outputs. (4) A value of 0.55 appeared to be a modest choice of threshold to balance high modeling efficiency against high bracketing efficiency. The results of this study provide (1) an option for conducting NPS modeling on a single computing platform, (2) important references for parameter setting in NPS model development in similar regions, (3) useful suggestions for applying the GLUE method in studies with different emphases according to research interest, and (4) important insights into watershed P management in similar regions.
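    As a rough illustration of the GLUE procedure described above, the base-R sketch below samples parameters, scores them with a Nash-Sutcliffe likelihood, applies the 0.55 behavioural threshold, and derives likelihood-weighted prediction bounds. The toy linear model and observations stand in for the actual NPS P indicator, which is not reproduced here.

      # Minimal GLUE sketch in base R; likelihood L1 = Nash-Sutcliffe efficiency,
      # behavioural threshold 0.55 as in the study. Model and data are toy values.
      set.seed(123)
      obs <- c(1.1, 1.9, 3.2, 3.8, 5.1)                   # illustrative observations
      toy_model <- function(a, b) a * seq_along(obs) + b  # stand-in model

      n_sets <- 5000
      pars <- data.frame(a = runif(n_sets, 0, 2), b = runif(n_sets, 0, 3))  # uniform priors

      nse <- apply(pars, 1, function(p) {
        sim <- toy_model(p["a"], p["b"])
        1 - sum((sim - obs)^2) / sum((obs - mean(obs))^2)
      })

      behavioural <- nse > 0.55                       # keep behavioural parameter sets
      w <- nse[behavioural] / sum(nse[behavioural])   # likelihood weights

      apply(pars[behavioural, ], 2, range)            # reduced parameter ranges

      # Likelihood-weighted 5%/95% uncertainty bounds for each output
      sims <- t(apply(pars[behavioural, ], 1, function(p) toy_model(p["a"], p["b"])))
      bounds <- apply(sims, 2, function(s) {
        o <- order(s); cw <- cumsum(w[o])
        c(lower = s[o][which(cw >= 0.05)[1]], upper = s[o][which(cw >= 0.95)[1]])
      })
      bounds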

  • Article type: Journal Article
    BACKGROUND: Different methods have been described for data extraction from pathology reports, with varying degrees of success. Here, a technique for directly extracting data from a relational database is described.
    METHODS: Our department uses synoptic reports modified from College of American Pathologists (CAP) Cancer Protocol Templates to report most of our cancer diagnoses. Choosing the melanoma of skin synoptic report as an example, the R scripting language extended with the RODBC package was used to query the pathology information system database. Reports containing the melanoma of skin synoptic report from the past four and a half years were retrieved, and the individual data elements were extracted. Using the retrieved list of cases, the database was queried a second time to retrieve/extract the lymph node staging information in the subsequent reports from the same patients.
    RESULTS: 426 synoptic reports corresponding to unique lesions of melanoma of skin were retrieved, and the data elements of interest were extracted into an R data frame. The distribution of Breslow depth of melanomas grouped by year is used as an example of intra-report data extraction and analysis. When new pN staging information was present in the subsequent reports, 82% (77/94) was retrieved precisely (pN0, pN1, pN2, and pN3). An additional 15% (14/94) was retrieved with some ambiguity (positive, or knowing that there was an update). The specificity was 100% for both. The relationship between Breslow depth and lymph node status was graphed as an example of lesion-specific multi-report data extraction and analysis.
    CONCLUSIONS: R extended with the RODBC package is a simple and versatile approach well suited to the above tasks. The success or failure of retrieval and extraction depended largely on whether the reports were formatted and whether the contents of the elements were consistently phrased. This approach can be easily modified and adopted for other pathology information systems that use a relational database for data management.
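    A minimal sketch of this approach is shown below. The DSN, credentials, table and column names, and the synoptic wording are assumptions for illustration; they will differ between laboratory information systems.

      library(RODBC)

      # Hypothetical DSN, credentials, table, and column names
      ch <- odbcConnect("PathDB", uid = "reader", pwd = "******")
      reports <- sqlQuery(ch, "
        SELECT accession_no, final_diagnosis, signout_date
          FROM surg_path_reports
         WHERE final_diagnosis LIKE '%MELANOMA OF SKIN: SYNOPTIC REPORT%'
      ", stringsAsFactors = FALSE)
      odbcClose(ch)

      # Extract one data element (Breslow depth, reported as e.g. 'Thickness: 1.2 mm');
      # reports without a parsable value become NA.
      reports$breslow_mm <- suppressWarnings(as.numeric(
        sub(".*Thickness:\\s*([0-9.]+)\\s*mm.*", "\\1", reports$final_diagnosis)
      ))

      # Distribution of Breslow depth by sign-out year (intra-report analysis)
      reports$year <- format(as.Date(reports$signout_date), "%Y")
      tapply(reports$breslow_mm, reports$year, summary)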

  • Article type: Editorial
    The case-crossover design is a variation of the case-control design in that it employs a person's own history periods as controls. It can be viewed as a hybrid of the case-control study and the crossover design. Confounding by characteristics that are constant within a person can be well controlled with this method. The relative risk and odds ratio, as well as their 95% confidence intervals (CIs), can be estimated using the Cochran-Mantel-Haenszel method. R code for the calculation is provided in the main text, and readers may adapt it to their own tasks. The conditional logistic regression model is another way to estimate the odds ratio of the exposure; furthermore, it allows for the incorporation of other time-varying covariates that are not constant within subjects. The model fitting per se is not technically difficult because well-developed statistical packages exist. However, it is challenging to convert the original dataset obtained from case report forms into a format suitable to be passed to the clogit() function. R code for this task is provided and explained in the text.
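    A small, self-contained illustration of both estimation routes on simulated 1:1 matched data (not the code from the editorial itself) might look like this:

      library(survival)

      # Toy 1:1 matched case-crossover data: each subject contributes a hazard
      # period (case = 1) and a control period (case = 0); 'exposure' is the
      # time-varying exposure of interest.
      set.seed(7)
      n  <- 100
      cc <- data.frame(
        id       = rep(1:n, each = 2),
        case     = rep(c(1, 0), times = n),
        exposure = rbinom(2 * n, 1, prob = rep(c(0.45, 0.30), times = n))
      )

      # Mantel-Haenszel common odds ratio with 95% CI, stratified by subject
      tab <- table(cc$exposure, cc$case, cc$id)      # 2 x 2 x n array
      mantelhaen.test(tab)

      # Conditional logistic regression gives the exposure odds ratio and allows
      # further time-varying covariates to be added to the formula
      fit <- clogit(case ~ exposure + strata(id), data = cc)
      summary(fit)
      exp(coef(fit))                                 # odds ratio for exposure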

  • Article type: Journal Article
    A case is made for the use of hierarchical models in the analysis of generalization gradients. Hierarchical models overcome several restrictions imposed by repeated-measures analysis of variance (rANOVA), the default statistical method in current generalization research. More specifically, hierarchical models allow the inclusion of continuous independent variables and overcome problematic assumptions such as sphericity. We focus on how generalization research can benefit from this added flexibility. In a simulation study we demonstrate the dominance of hierarchical models over rANOVA. In addition, we show the lack of efficiency of Mauchly's sphericity test at sample sizes typical of generalization research, and confirm how violations of sphericity increase the probability of type I errors. A worked example of a hierarchical model is provided, with a specific emphasis on the interpretation of parameters relevant to generalization research.
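    As an illustration of the added flexibility, the sketch below fits a hierarchical model to simulated generalization-gradient data with lme4; the variable names and toy data are assumptions, not the authors' worked example.

      library(lme4)

      # Toy generalization-gradient data: 30 participants, responses to 9 stimuli
      # on a continuous dimension (distance from the CS+), with participant-specific
      # gradients.
      set.seed(11)
      n_pp <- 30
      dat  <- expand.grid(pp = factor(1:n_pp), stim = seq(-1, 1, length.out = 9))
      grad <- rnorm(n_pp, mean = -2, sd = 0.5)
      dat$resp <- 6 + grad[dat$pp] * abs(dat$stim) + rnorm(nrow(dat))

      # Random intercept and random gradient per participant; the stimulus dimension
      # stays continuous and no sphericity assumption is needed.
      fit <- lmer(resp ~ abs(stim) + (1 + abs(stim) | pp), data = dat)
      summary(fit)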

  • Article type: Journal Article
    Results of pharmacometric analyses influence high-level decisions such as clinical trial design, drug approval, and labeling. Key challenges for the timely delivery of pharmacometric analyses are the data assembly process and the tracking and documentation of the modeling process and results. Since clinical efficacy and safety data typically reside in the biostatistics computing area, an integrated computing platform for pharmacometric and biostatistical analyses would be ideal. A case study is presented that integrates a pharmacometric modeling platform into an existing statistical computing environment (SCE). The feasibility and specific configurations of running common PK/PD programs such as NONMEM and R inside the SCE are provided. The case study provides an example of an integrated repository that facilitates efficient data assembly for pharmacometric analyses. The proposed platform encourages good pharmacometrics working practice to maintain the transparency, traceability, and reproducibility of PK/PD models and associated data in supporting drug development and regulatory decisions.
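    As a rough sketch of what running NONMEM from R inside such an environment can look like: the paths, run names, and wrapper script name below are assumptions, and the .ext layout described in the comments is the usual NONMEM 7 convention rather than anything specific to this platform.

      # Minimal sketch (assumed paths and run name): submit a NONMEM run from R
      # inside the SCE and read back the final parameter estimates.
      run_dir <- "/sce/study123/pk/run001"    # hypothetical analysis directory
      setwd(run_dir)

      # nmfe75 is the usual NONMEM 7.5 wrapper script; adjust to the local install
      system2("nmfe75", args = c("run001.mod", "run001.lst"))

      # A single-table .ext file has one "TABLE NO. ..." line before the column
      # headers; the row with ITERATION == -1000000000 holds the final estimates.
      ext <- read.table("run001.ext", skip = 1, header = TRUE)
      final_estimates <- ext[ext$ITERATION == -1000000000, ]
      final_estimates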

  • Article type: Case Reports
    Large public repositories of microarray experiments offer an abundance of biological data. It is of interest to use and to combine the available material to create new biological information and to develop a broader view of biological phenomena. Meta-analyses recombine similar information over a series of experiments to sketch scientific aspects that were not accessible to any single experiment. Meta-analysis of high-throughput experiments has to handle methodological as well as technical challenges. Methodological aspects concern the identification of homogeneous material that can be combined by appropriate statistical procedures. Technical challenges come from the data management of large amounts of high-dimensional data, long computation times, and the handling of the stored phenotype data. In a meta-analysis of a large series of microarray experiments, this paper compares the interaction structure within selected pathways between different tumour entities. The feasibility of such a study is explored, and a technical as well as a statistical framework for its completion is presented. Multiple obstacles were met during the completion of this project. They are mainly related to the quality of the available data and influence the biological interpretation of the results derived. The sobering experience of our study calls for combined efforts to improve the data quality in public repositories of high-throughput data. The exploration of the available data in large meta-analyses is limited by incomplete documentation of essential aspects of experiments and studies, by technical deficiencies in the stored data, and by careless duplications of data.
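    As a small illustration of the kind of recombination such a meta-analysis performs, the sketch below pools a single gene-pair correlation across several experiments via Fisher's z with inverse-variance weights; the numbers are toy values, not data from the study.

      # Toy numbers: per-experiment correlations of one gene pair and sample sizes
      r_i <- c(0.62, 0.48, 0.55, 0.30)
      n_i <- c(80, 45, 120, 60)

      z_i <- atanh(r_i)        # Fisher's z transform
      w_i <- n_i - 3           # inverse of var(z) = 1 / (n - 3)

      z_pooled  <- sum(w_i * z_i) / sum(w_i)
      se_pooled <- sqrt(1 / sum(w_i))
      c(r     = tanh(z_pooled),
        lower = tanh(z_pooled - 1.96 * se_pooled),
        upper = tanh(z_pooled + 1.96 * se_pooled))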
