FHIR

  • Article type: Journal Article
    Transparency and traceability are essential for establishing trustworthy artificial intelligence (AI). The lack of transparency in the data preparation process is a significant obstacle to developing reliable AI systems and can lead to issues related to reproducibility, debugging of AI models, bias and fairness, and compliance and regulation. We introduce a formal data preparation pipeline specification to improve upon the manual and error-prone data extraction processes used in AI and data analytics applications, with a focus on traceability.
    We propose a declarative language to define the extraction of AI-ready datasets from health data adhering to a common data model, particularly those conforming to HL7 Fast Healthcare Interoperability Resources (FHIR). We utilize FHIR profiling to develop a common data model tailored to an AI use case, enabling the explicit declaration of the needed information, such as phenotype and AI feature definitions. In our pipeline model, we convert complex, high-dimensional electronic health record data with irregular time-series sampling into a flat structure by defining a target population, feature groups and final datasets. Our design considers the requirements of various AI use cases from different projects, which led to the implementation of many feature types exhibiting intricate temporal relations.
    We implement a scalable, high-performance feature repository to execute the data preparation pipeline definitions. This software not only ensures reliable, fault-tolerant distributed processing to produce AI-ready datasets and their metadata, including accompanying statistics, but also serves as a pluggable component of a decision support application based on a trained AI model, automatically preparing feature values for individual entities during online prediction. We deployed and tested the proposed methodology and implementation in three different research projects. We present the developed FHIR profiles as a common data model, together with the feature group definitions and feature definitions of a data preparation pipeline used to train an AI model for "predicting complications after cardiac surgeries".
    Through implementations across various pilot use cases, we have demonstrated that our framework has the necessary breadth and flexibility to define a diverse array of features, each tailored to specific temporal and contextual criteria.
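    The abstract describes a declarative pipeline language but does not reproduce its syntax. As an illustration of the general idea only, the minimal Python sketch below declares a hypothetical target population and feature group over FHIR resources and flattens an irregular Observation time series into aggregated feature values; the structure, names and codes are assumptions, not the authors' language.

```python
# Hypothetical sketch only: the paper's actual declarative pipeline language is not
# reproduced here. It illustrates declaring a target population and a feature group
# over FHIR resources and flattening an irregular time series into flat features.
from datetime import datetime, timedelta

# Hypothetical, declarative pipeline definition (structure and codes are illustrative).
pipeline = {
    "population": {"resource": "Patient", "condition_code": "example-condition-code"},
    "feature_groups": [
        {
            "name": "heart_rate_last_24h",
            "resource": "Observation",
            "loinc": "8867-4",          # LOINC code for heart rate
            "window_hours": 24,
            "aggregations": ["min", "max", "mean"],
        }
    ],
}

def flatten_observations(observations, window_hours, reference_time):
    """Aggregate irregularly sampled Observation values into flat feature values."""
    cutoff = reference_time - timedelta(hours=window_hours)
    values = [
        obs["valueQuantity"]["value"]
        for obs in observations
        if datetime.fromisoformat(obs["effectiveDateTime"]) >= cutoff
    ]
    if not values:
        return {"min": None, "max": None, "mean": None}
    return {"min": min(values), "max": max(values), "mean": sum(values) / len(values)}

# Example usage with two minimal FHIR Observation payloads (values are made up).
observations = [
    {"effectiveDateTime": "2024-01-01T10:00:00", "valueQuantity": {"value": 72}},
    {"effectiveDateTime": "2024-01-01T20:00:00", "valueQuantity": {"value": 88}},
]
print(flatten_observations(observations, 24, datetime(2024, 1, 2, 6, 0)))
# -> {'min': 72, 'max': 88, 'mean': 80.0}
```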
  • Article type: Journal Article
    Unlocking the potential of routine medical data for clinical research requires the analysis of data from multiple healthcare institutions. However, according to German data protection regulations, data often cannot leave the individual institutions, and decentralized approaches are needed. Decentralized studies face challenges regarding coordination, technical infrastructure, interoperability and regulatory compliance. Rare diseases are an important prototype research focus for decentralized data analyses, as patients are rare by definition and adequate cohort sizes can only be reached if data from multiple sites are combined.
    Within the project "Collaboration on Rare Diseases", decentralized studies focusing on four rare diseases (cystic fibrosis, phenylketonuria, Kawasaki disease, multisystem inflammatory syndrome in children) were conducted at 17 German university hospitals. To this end, a data management process for decentralized studies was developed by an interdisciplinary team of experts from medicine, public health and data science. Along this process, lessons learned were formulated and discussed.
    The process consists of eight steps and includes sub-processes for the definition of medical use cases, script development and data management. The lessons learned concern, on the one hand, the organization and administration of the studies (collaboration of experts, use of standardized forms and publication of project information) and, on the other hand, the development of scripts and analyses (dependency on the database, use of standards and open-source tools, feedback loops, anonymization).
    This work captures central challenges and describes possible solutions, and it can hence serve as a solid basis for the implementation and conduct of similar decentralized studies.
  • Article type: Journal Article
    The increasing prevalence of electronic health records (EHRs) in healthcare systems globally has underscored the importance of data quality for clinical decision-making and research, particularly in obstetrics. High-quality data is vital for an accurate representation of patient populations and to avoid erroneous healthcare decisions. However, existing studies have highlighted significant challenges in EHR data quality, necessitating innovative tools and methodologies for effective data quality assessment and improvement.
    This article addresses the critical need for data quality evaluation in obstetrics by developing a novel tool. The tool utilizes Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standards in conjunction with Bayesian networks and expert rules, offering a new approach to assessing data quality in real-world obstetrics data.
    A harmonized framework focusing on completeness, plausibility, and conformance underpins our methodology. We employed Bayesian networks for advanced probabilistic modeling, integrated outlier detection methods, and a rule-based system grounded in domain-specific knowledge. The development and validation of the tool were based on obstetrics data from 9 Portuguese hospitals, spanning the years 2019-2020.
    The developed tool demonstrated strong potential for identifying data quality issues in obstetrics EHRs. The Bayesian networks used in the tool showed high performance for various features, with an area under the receiver operating characteristic curve (AUROC) between 75% and 97%. The tool's infrastructure and interoperable format as a FHIR Application Programming Interface (API) enable deployment for real-time data quality assessment in obstetrics settings. Our initial assessments show promise: even when compared with physicians' assessments of real records, the tool can reach an AUROC of 88%, depending on the defined threshold.
    Our results also show that obstetrics clinical records are difficult to assess in terms of quality, and assessments like ours could benefit from a more categorical approach to ranking records between poor and good quality.
    This study contributes significantly to the field of EHR data quality assessment, with a specific focus on obstetrics. The combination of HL7 FHIR interoperability, machine learning techniques, and expert knowledge presents a robust, adaptable solution to the challenges of healthcare data quality. Future research should explore tailored data quality evaluations for different healthcare contexts, as well as further validation of the tool's capabilities, enhancing its utility across diverse medical domains.
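    The published tool combines Bayesian networks with expert rules; the abstract does not include the rules themselves. The sketch below illustrates only the rule-based completeness/plausibility side of such a check over a single FHIR Observation, with hypothetical LOINC codes and value ranges.

```python
# Illustrative sketch only: the published tool combines Bayesian networks with expert
# rules; this shows just a rule-based completeness/plausibility check over one FHIR
# Observation, with hypothetical LOINC codes and value ranges.

PLAUSIBLE_RANGES = {
    # Hypothetical expert rules: LOINC code -> (min, max) of the expected value range.
    "8302-2": (30.0, 60.0),   # body height of a newborn in cm (assumed range)
    "29463-7": (0.3, 7.0),    # body weight of a newborn in kg (assumed range)
}

def check_observation(obs: dict) -> dict:
    """Return completeness and plausibility flags for a FHIR Observation dict."""
    coding = obs.get("code", {}).get("coding") or [{}]
    code = coding[0].get("code")
    value = obs.get("valueQuantity", {}).get("value")
    complete = code is not None and value is not None
    plausible = None  # unknown if the Observation is incomplete or no rule exists
    if complete and code in PLAUSIBLE_RANGES:
        low, high = PLAUSIBLE_RANGES[code]
        plausible = low <= value <= high
    return {"complete": complete, "plausible": plausible}

# Example usage with a minimal Observation payload.
print(check_observation({
    "code": {"coding": [{"system": "http://loinc.org", "code": "29463-7"}]},
    "valueQuantity": {"value": 3.4, "unit": "kg"},
}))
# -> {'complete': True, 'plausible': True}
```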
  • Article type: Journal Article
    OBJECTIVE: Real-time data (RTD) are data that are delivered immediately after creation. The key feature of RTD is low delivery latency. Information systems in health care are extremely time-sensitive, and their building block is the electronic health record (EHR). Real-time data from EHRs play an important role in supporting decision-making, analytics and coordination of care. This is frequently mentioned in the literature, but the process has not yet been described with reference implementations and testing. Real-time data delivery can technically be achieved using several methods. The objective of this work is to evaluate the performance of different methods of transferring RTD from EHRs by measuring delivery latency.
    METHODS: In our work we used four approaches to transfer RTD from EHRs: REST hooks, WebSocket notifications, a reverse proxy and database triggers. We deployed a Fast Healthcare Interoperability Resources (FHIR) server, as FHIR is one of the most widely used EHR standards. For the reference implementations we used Python and Golang. Delivery latency was selected as the performance metric, derived by subtracting the timestamp of EHR resource creation from the timestamp of EHR resource receipt, in milliseconds. The data were analyzed using descriptive statistics, the cumulative distribution function (CDF), the Kruskal-Wallis test and post-hoc tests.
    RESULTS: The database trigger approach had the best mean delivery latency (13.52±5.56 ms), followed by the reverse proxy (14.43±4.58 ms), REST hooks (19.26±5.76 ms) and WebSocket notifications (27.32±9.44 ms). The reverse proxy showed a tighter range of values and lower variability. There were significant differences in the latencies between all pairs of approaches, except between the reverse proxy and the database trigger.
    CONCLUSIONS: Real-time data transfer is vital for the development of robust and innovative healthcare applications. The properties of current EHR systems as a data source predefine the available transfer approaches. In our work, the performance of RTD transfer from EHRs is measured and evaluated with reference implementations for the first time. We found that database triggers achieve the lowest delivery latency. The reverse proxy performed slightly slower but offered more stability, followed by REST hooks and WebSocket notifications.
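    As a hedged illustration of how delivery latency can be measured on the receiving side (not the authors' reference implementation), the Python sketch below shows a minimal REST-hook-style endpoint that subtracts the resource's server-assigned `meta.lastUpdated` timestamp from the receipt time; the endpoint path and the use of Flask are assumptions.

```python
# Illustrative sketch only (not the authors' reference implementation): a minimal
# REST-hook-style receiver that computes delivery latency as the difference between
# the time a notification arrives and the resource's server-assigned timestamp.
from datetime import datetime, timezone

from flask import Flask, request  # Flask is an assumed dependency for this sketch

app = Flask(__name__)

@app.route("/fhir-notify", methods=["POST"])  # hypothetical endpoint registered as a hook
def fhir_notify():
    received_at = datetime.now(timezone.utc)
    resource = request.get_json()
    # FHIR resources carry the server-assigned creation/update time in meta.lastUpdated.
    created_at = datetime.fromisoformat(
        resource["meta"]["lastUpdated"].replace("Z", "+00:00")
    )
    latency_ms = (received_at - created_at).total_seconds() * 1000
    print(f"{resource['resourceType']}/{resource.get('id')}: {latency_ms:.2f} ms")
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)
```

    A measurement of this kind assumes that the clocks of the FHIR server and the receiver are synchronized.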
  • Article type: Journal Article
    BACKGROUND: In German and international research networks, different approaches concerning patient consent are applied. So far, it has been time-consuming to find out to what extent data from these networks can be used for a specific research project. To make the contents of the consents queryable, we aimed for a permission-based approach (opt-in) that can map both the permission and the withdrawal of consent contents and make them queryable beyond project boundaries.
    METHODS: The current state of research was analysed in terms of approach and reusability. In a next step, selected process models for defining consent policies were abstracted. On this basis, a standardised semantic terminology for the description of consent policies was developed and initially agreed with experts. In a final step, the resulting code was evaluated with regard to different aspects of applicability.
    RESULTS: A first, extendable version of a Semantic Consent Code (SCC) based on three axes (CLASS, ACTION, PURPOSE) was developed, consolidated and published. The added value achieved by the SCC was illustrated using the example of real consents from large national research associations (Medical Informatics Initiative and NUM NAPKON/NUKLEUS). The applicability of the SCC was successfully evaluated in terms of the manual semantic mapping of consents by briefly trained personnel and the automated interpretability of consent policies according to the SCC (and vice versa). In addition, a concept for using the SCC to simplify consent queries in heterogeneous research scenarios was presented.
    CONCLUSIONS: The Semantic Consent Code has already successfully undergone initial evaluations. The published 3-axis SCC is essential preliminary work towards standardising initially diverse consent texts and contents, and it can be iteratively extended in multiple ways in terms of content and technical additions. It should be extended in cooperation with the potential user community.
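    The SCC's actual code lists are not given in the abstract. The sketch below only illustrates the general idea of 3-axis (CLASS, ACTION, PURPOSE) consent policies and a query that checks whether a requested use is permitted; all axis values and the precedence rule are hypothetical placeholders.

```python
# Illustrative sketch only: the SCC's real code lists are not reproduced here. It shows
# the idea of 3-axis (CLASS, ACTION, PURPOSE) consent policies and a query over them;
# all axis values below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class ConsentPolicy:
    clazz: str    # CLASS axis: what kind of data or material is concerned
    action: str   # ACTION axis: what may be done with it
    purpose: str  # PURPOSE axis: the context the action is restricted to
    permit: bool  # True for a permission, False for a denial/withdrawal

policies = [
    ConsentPolicy("PATIENT_DATA", "PROCESS", "RESEARCH", True),
    ConsentPolicy("PATIENT_DATA", "TRANSFER", "COMMERCIAL_USE", False),
]

def is_permitted(clazz: str, action: str, purpose: str) -> bool:
    """A use is permitted only if a matching policy exists and none of them denies it."""
    matching = [p for p in policies
                if (p.clazz, p.action, p.purpose) == (clazz, action, purpose)]
    return bool(matching) and all(p.permit for p in matching)

print(is_permitted("PATIENT_DATA", "PROCESS", "RESEARCH"))         # True
print(is_permitted("PATIENT_DATA", "TRANSFER", "COMMERCIAL_USE"))  # False
```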
  • Article type: Journal Article
    BACKGROUND: Evidence-based medicine (EBM) has the potential to improve health outcomes, but EBM has not been widely integrated into the systems used for research or clinical decision-making. There has not been a scalable and reusable computer-readable standard for distributing research results and synthesized evidence among creators, implementers, and the ultimate users of that evidence. Evidence that is more rapidly updated, synthesized, disseminated, and implemented would improve both the delivery of EBM and evidence-based health care policy.
    OBJECTIVE: This study aimed to introduce the EBM on Fast Healthcare Interoperability Resources (FHIR) project (EBMonFHIR), which is extending the methods and infrastructure of Health Level Seven (HL7) FHIR to provide an interoperability standard for the electronic exchange of health-related scientific knowledge.
    METHODS: As an ongoing process, the project creates and refines FHIR resources to represent evidence from clinical studies and syntheses of those studies and develops tools to assist with the creation and visualization of FHIR resources.
    RESULTS: The EBMonFHIR project created FHIR resources (ie, ArtifactAssessment, Citation, Evidence, EvidenceReport, and EvidenceVariable) for representing evidence. The COVID-19 Knowledge Accelerator (COKA) project, now Health Evidence Knowledge Accelerator (HEvKA), took this work further and created FHIR resources that express EvidenceReport, Citation, and ArtifactAssessment concepts. The group is (1) continually refining FHIR resources to support the representation of EBM; (2) developing controlled terminology related to EBM (ie, study design, statistic type, statistical model, and risk of bias); and (3) developing tools to facilitate the visualization and data entry of EBM information into FHIR resources, including human-readable interfaces and JSON viewers.
    CONCLUSIONS: EBMonFHIR resources in conjunction with other FHIR resources can support relaying EBM components in a manner that is interoperable and consumable by downstream tools and health information technology systems to support the users of evidence.
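    As a hedged illustration of how downstream tools might consume such resources, the sketch below queries a FHIR server for Evidence resources and prints a few commonly present elements; the endpoint is a placeholder and real EBMonFHIR resources carry far more structure.

```python
# Illustrative sketch only: querying a FHIR server for Evidence resources and printing
# a few commonly present elements. The endpoint is a placeholder; real EBMonFHIR
# resources contain much more structure (statistics, variable definitions, certainty).
import requests  # assumed dependency for this sketch

FHIR_BASE = "https://example.org/fhir"  # hypothetical FHIR endpoint

def list_evidence(max_results: int = 10) -> None:
    response = requests.get(
        f"{FHIR_BASE}/Evidence",
        params={"_count": max_results},
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    response.raise_for_status()
    bundle = response.json()  # a FHIR searchset Bundle
    for entry in bundle.get("entry", []):
        resource = entry["resource"]
        label = resource.get("title") or resource.get("description", "")
        print(resource["resourceType"], resource.get("id"), "-", label)

if __name__ == "__main__":
    list_evidence()
```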
  • Article type: Journal Article
    BACKGROUND: Precision public health (PPH) can maximize impact by targeting surveillance and interventions by temporal, spatial, and epidemiological characteristics. Although rapid diagnostic tests (RDTs) have enabled ubiquitous point-of-care testing in low-resource settings, their impact has been less than anticipated, owing in part to lack of features to streamline data capture and analysis.
    OBJECTIVE: We aimed to transform the RDT into a tool for PPH by defining information and data axioms and an information utilization index (IUI); identifying design features to maximize the IUI; and producing open guidelines (OGs) for modular RDT features that enable links with digital health tools to create an RDT-OG system.
    METHODS: We reviewed published papers and conducted a survey with experts or users of RDTs in the sectors of technology, manufacturing, and deployment to define features and axioms for information utilization. We developed an IUI, ranging from 0% to 100%, and calculated this index for 33 World Health Organization-prequalified RDTs. RDT-OG specifications were developed to maximize the IUI; the feasibility and specifications were assessed through developing malaria and COVID-19 RDTs based on OGs for use in Kenya and Indonesia.
    RESULTS: The survey respondents (n=33) included 16 researchers, 7 technologists, 3 manufacturers, 2 doctors or nurses, and 5 other users. They were most concerned about the proper use of RDTs (30/33, 91%), their interpretation (28/33, 85%), and reliability (26/33, 79%), and were confident that smartphone-based RDT readers could address some reliability concerns (28/33, 85%), and that readers were more important for complex or multiplex RDTs (33/33, 100%). The IUI of prequalified RDTs ranged from 13% to 75% (median 33%). In contrast, the IUI for an RDT-OG prototype was 91%. The RDT open guideline system that was developed was shown to be feasible by (1) creating a reference RDT-OG prototype; (2) implementing its features and capabilities on a smartphone RDT reader, cloud information system, and Fast Healthcare Interoperability Resources; and (3) analyzing the potential public health impact of RDT-OG integration with laboratory, surveillance, and vital statistics systems.
    CONCLUSIONS: Policy makers and manufacturers can define, adopt, and synergize with RDT-OGs and digital health initiatives. The RDT-OG approach could enable real-time diagnostic and epidemiological monitoring with adaptive interventions to facilitate control or elimination of current and emerging diseases through PPH.
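    The abstract does not state how the IUI is computed, only that it ranges from 0% to 100%. Purely as a hypothetical illustration, the sketch below scores an RDT as the share of satisfied information-utilization features from a placeholder checklist; the real axioms, features and any weighting differ.

```python
# Hypothetical sketch only: the paper's actual axioms, features and scoring rules are
# not given in the abstract. This merely illustrates computing an index from 0% to
# 100% as the share of satisfied information-utilization features of an RDT design.

# Placeholder feature checklist; the real axioms/features differ.
FEATURES = [
    "machine_readable_result",
    "unique_test_identifier",
    "lot_and_expiry_encoded",
    "linkable_to_digital_reader",
    "structured_data_export",
]

def information_utilization_index(rdt_features: set) -> float:
    """Return the percentage of checklist features that the RDT satisfies."""
    satisfied = sum(1 for feature in FEATURES if feature in rdt_features)
    return 100.0 * satisfied / len(FEATURES)

# Example: an RDT meeting two of the five placeholder features scores 40%.
print(information_utilization_index({"unique_test_identifier", "lot_and_expiry_encoded"}))
```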
  • Article type: Journal Article
    It is necessary to harmonize and standardize data variables used in case report forms (CRFs) of clinical studies to facilitate the merging and sharing of the collected patient data across several clinical studies. This is particularly true for clinical studies that focus on infectious diseases. Public health may be highly dependent on the findings of such studies. Hence, there is an elevated urgency to generate meaningful, reliable insights, ideally based on a high sample number and quality data. The implementation of core data elements and the incorporation of interoperability standards can facilitate the creation of harmonized clinical data sets.
    This study's objective was to compare, harmonize, and standardize variables focused on the diagnostic tests used as part of CRFs in 6 international clinical studies of infectious diseases in order to ultimately make the panstudy common data elements (CDEs) available for ongoing and future studies, fostering interoperability and comparability of collected data across trials.
    We reviewed and compared the metadata that comprised the CRFs used for data collection in and across all 6 infectious disease studies under consideration in order to identify CDEs. We examined the availability of international semantic standard codes within the Systematized Nomenclature of Medicine - Clinical Terms, the National Cancer Institute Thesaurus, and the Logical Observation Identifiers Names and Codes system for the unambiguous representation of the diagnostic testing information that makes up the CDEs. We then proposed 2 data models that incorporate semantic and syntactic standards for the identified CDEs.
    Of 216 variables that were considered in the scope of the analysis, we identified 11 CDEs to describe diagnostic tests (in particular, serology and sequencing) for infectious diseases: viral lineage/clade; test date, type, performer, and manufacturer; target gene; quantitative and qualitative results; and specimen identifier, type, and collection date.
    The identification of CDEs for infectious diseases is the first step in facilitating the exchange and possible merging of a subset of data across clinical studies (and, with that, large research projects) for possible shared analysis to increase the power of findings. The path to harmonization and standardization of clinical study data in the interest of interoperability can be paved in 2 ways. First, mapping to standard terminologies ensures that each data element's (variable's) definition is unambiguous and that it has a single, unique interpretation across studies. Second, the exchange of these data is assisted by "wrapping" them in a standard exchange format, such as Fast Healthcare Interoperability Resources or the Clinical Data Interchange Standards Consortium's Clinical Data Acquisition Standards Harmonization Model.
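    As a simple illustration of the 11 identified CDEs (not the study's actual data models or their terminology bindings), the sketch below captures one diagnostic test record in a plain Python structure with field names paraphrased from the abstract.

```python
# Illustrative sketch only: a plain-Python representation of the 11 identified CDEs for
# one infectious disease diagnostic test. Field names are paraphrased from the abstract;
# the study's actual data models (and their FHIR/CDISC bindings) differ.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DiagnosticTestCDE:
    test_date: date
    test_type: str                         # e.g. serology, sequencing
    test_performer: str
    test_manufacturer: str
    target_gene: Optional[str]             # relevant for sequencing/molecular tests
    viral_lineage_or_clade: Optional[str]
    quantitative_result: Optional[float]
    qualitative_result: Optional[str]      # e.g. positive / negative / inconclusive
    specimen_identifier: str
    specimen_type: str                     # ideally bound to a standard terminology
    specimen_collection_date: date

# Example record with made-up values.
record = DiagnosticTestCDE(
    test_date=date(2021, 3, 2),
    test_type="sequencing",
    test_performer="Site 03 laboratory",
    test_manufacturer="ExampleCo",
    target_gene="S gene",
    viral_lineage_or_clade="B.1.1.7",
    quantitative_result=None,
    qualitative_result="positive",
    specimen_identifier="SPEC-0042",
    specimen_type="nasopharyngeal swab",
    specimen_collection_date=date(2021, 3, 1),
)
print(record)
```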
  • Article type: Journal Article
    HL7 FHIR was created almost a decade ago and is seeing increasingly wide use in high-income settings. Although some initial work was carried out in low- and middle-income country (LMIC) settings, there had been little impact until recently. The need for reliable and easy-to-implement interoperability between health information systems in LMICs is growing with large-scale deployments of EHRs, national reporting systems and mHealth applications. The OpenMRS open-source EHR has been deployed in more than 44 LMICs, with increasing needs for interoperability with other HIS. We describe here the development and deployment of a new FHIR module supporting the latest standards and its use for interoperability with laboratory systems, mHealth applications and pharmacy dispensing systems, and as a tool for supporting advanced user interface designs. We also show how it facilitates data science projects and the deployment of machine learning-based CDSS and precision medicine in LMICs.
  • Article type: Journal Article
    The care model Hospital@Home offers hospital-level treatment at home, aiming to alleviate hospital strain and enhance patient comfort. Despite its potential, the integration of digital health solutions into this care model remains limited. This paper proposes a concept for integrating laboratory testing at the Point of Care (POC) into Hospital@Home models to improve efficiency and interoperability.
    METHODS: Using the HL7 FHIR standard and cloud infrastructure, we developed a concept for the direct transmission of laboratory data collected at the POC. Requirements were derived from the literature and from discussions with a POC testing device producer. An architecture for data exchange was developed based on these requirements.
    RESULTS: Our concept enables access to laboratory data collected at the POC, facilitating efficient data transfer and enhancing interoperability. A hypothetical scenario demonstrates the concept's feasibility and benefits, showcasing improved patient care and streamlined processes in Hospital@Home settings.
    CONCLUSIONS: The integration of POC data into Hospital@Home models using the HL7 FHIR standard and cloud infrastructure offers the potential to enhance patient care and streamline processes. Addressing challenges such as data security and privacy is crucial for successful implementation in practice.
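    As a hedged illustration of the proposed data flow (not the paper's architecture), the sketch below sends one POC laboratory result to a cloud-hosted FHIR server as a FHIR Observation; the endpoint, patient reference and authentication handling are placeholders.

```python
# Illustrative sketch only: sending one POC laboratory result to a cloud-hosted FHIR
# server as a FHIR Observation. The endpoint, patient reference and the absence of
# authentication are placeholders; a real deployment needs security and consent handling.
import requests  # assumed dependency for this sketch

FHIR_BASE = "https://example-cloud.org/fhir"  # hypothetical cloud FHIR endpoint

observation = {
    "resourceType": "Observation",
    "status": "final",
    "category": [{"coding": [{
        "system": "http://terminology.hl7.org/CodeSystem/observation-category",
        "code": "laboratory"}]}],
    "code": {"coding": [{"system": "http://loinc.org", "code": "2339-0",
                         "display": "Glucose [Mass/volume] in Blood"}]},
    "subject": {"reference": "Patient/example"},       # placeholder patient
    "effectiveDateTime": "2024-05-01T09:30:00+00:00",
    "valueQuantity": {"value": 98, "unit": "mg/dL",
                      "system": "http://unitsofmeasure.org", "code": "mg/dL"},
}

response = requests.post(
    f"{FHIR_BASE}/Observation",
    json=observation,
    headers={"Content-Type": "application/fhir+json"},
    timeout=30,
)
response.raise_for_status()
print("Created:", response.headers.get("Location"))
```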