medical AI

  • Article type: Journal Article
    This paper conducts a comparative analysis of data governance mechanisms concerning the secondary use of health data in Taiwan and the European Union (EU). Both regions have adopted distinctive approaches and regulations for utilizing health data beyond primary care, encompassing areas such as medical research and healthcare system enhancement. Through an examination of these models, this study seeks to elucidate the strategies, frameworks, and legal structures employed by Taiwan and the EU to strike a delicate balance between the imperative of data-driven healthcare innovation and the safeguarding of individual privacy rights. This paper examines and compares several key aspects of the secondary use of health data in Taiwan and the EU. These aspects include data governance frameworks, legal and regulatory frameworks, data access and sharing mechanisms, and privacy and security considerations. This comparative exploration offers invaluable insights into the evolving global landscape of health data governance. It provides a deeper understanding of the strategies implemented by these regions to harness the potential of health data while upholding the ethical and legal considerations surrounding its secondary use. The findings aim to inform best practices for responsible and effective health data utilization, particularly in the context of medical AI applications.

  • Article type: Editorial
    No abstract available.

  • Article type: Journal Article
    Trustworthy medical AI requires transparency about the development and testing of the underlying algorithms to identify biases and communicate potential risks of harm. Abundant guidance exists on how to achieve transparency for medical AI products, but it is unclear whether publicly available information adequately informs about their risks. To assess this, we retrieved public documentation on the 14 available CE-certified AI-based radiology products of the IIb risk category in the EU from vendor websites, scientific publications, and the European EUDAMED database. Using a self-designed survey, we reported on their development, validation, ethical considerations, and deployment caveats, according to trustworthy AI guidelines. We scored each question with 0, 0.5, or 1 to rate whether the required information was "unavailable," "partially available," or "fully available." The transparency of each product was calculated relative to all 55 questions. Transparency scores ranged from 6.4% to 60.9%, with a median of 29.1%. Major transparency gaps included missing documentation on training data, ethical considerations, and limitations for deployment. Ethical aspects such as consent, safety monitoring, and GDPR compliance were rarely documented. Furthermore, deployment caveats for different demographics and medical settings were scarce. In conclusion, the public documentation of authorized medical AI products in Europe lacks sufficient transparency to inform about safety and risks. We call on lawmakers and regulators to establish legally mandated requirements for public and substantive transparency to fulfill the promise of trustworthy AI for health.
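
    The transparency score described above is straightforward arithmetic: per-question ratings of 0, 0.5, or 1 are summed and divided by the 55 survey questions. A minimal sketch of that computation follows; the product names and ratings are invented placeholders for illustration, not the study's data.

```python
import statistics

QUESTIONS = 55  # survey items per product, per the abstract

def transparency_score(ratings: list[float]) -> float:
    """Percent of required information publicly available for one product.

    Each rating is 0 ("unavailable"), 0.5 ("partially available"),
    or 1 ("fully available"); the score is relative to all 55 questions.
    """
    assert len(ratings) == QUESTIONS and all(r in (0, 0.5, 1) for r in ratings)
    return 100 * sum(ratings) / QUESTIONS

# Hypothetical ratings for two fictitious products.
products = {
    "product_a": [1] * 10 + [0.5] * 20 + [0] * 25,
    "product_b": [1] * 30 + [0.5] * 5 + [0] * 20,
}
scores = {name: transparency_score(r) for name, r in products.items()}
print(scores)                              # per-product percentages
print(statistics.median(scores.values()))  # median across products
```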

  • Article type: Journal Article
    To investigate the opinions and attitudes of medical professionals towards adopting AI-enabled healthcare technologies in their daily practice, we used a mixed-methods approach. Study 1 employed a qualitative computational grounded-theory approach, analyzing 181 Reddit threads from several subreddits of r/medicine. Utilizing an unsupervised machine-learning clustering method, we identified three key themes: (1) consequences of AI, (2) the physician-AI relationship, and (3) a proposed way forward. In particular, Reddit posts related to the first two themes indicated that medical professionals' fear of being replaced by AI and skepticism toward AI played a major role in the arguments. Moreover, the results suggest that this fear is driven by little or moderate knowledge about AI. Posts related to the third theme focused on factual discussions about how AI and medicine have to be designed to become broadly adopted in healthcare. Study 2 quantitatively examined the relationships between fear of AI, knowledge about AI, and medical professionals' intention to use AI technologies in more detail. Results based on a sample of 223 medical professionals who participated in an online survey revealed that the intention to use AI technologies increases with increasing knowledge about AI, and that this effect is moderated by the fear of being replaced by AI.
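
    The moderation effect reported in Study 2 (knowledge predicts intention, with fear of replacement modulating the effect) corresponds to a standard interaction-term regression. Below is a minimal sketch with statsmodels on simulated data; the variable names and effect sizes are hypothetical, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 223  # matches the reported sample size; the data here are simulated

# Hypothetical standardized measures.
knowledge = rng.normal(size=n)  # knowledge about AI
fear = rng.normal(size=n)       # fear of being replaced by AI
intention = 0.5 * knowledge - 0.3 * knowledge * fear + rng.normal(size=n)

df = pd.DataFrame({"knowledge": knowledge, "fear": fear,
                   "intention": intention})

# "knowledge * fear" expands to both main effects plus the
# knowledge:fear interaction, which carries the moderation test.
model = smf.ols("intention ~ knowledge * fear", data=df).fit()
print(model.summary())
```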

  • Article type: Journal Article
    The escalating integration of Artificial Intelligence (AI) in clinical settings carries profound implications for the doctrine of informed consent, presenting challenges that necessitate immediate attention. China, advancing rapidly in the deployment of medical AI, is proactively engaged in formulating legal and ethical regulations. This paper takes China as an example to undertake a theoretical examination rooted in the principles of medical ethics and legal norms, analyzing informed consent and medical AI through the relevant literature. The study reveals that medical AI poses fundamental challenges to the accuracy, adequacy, and objectivity of information disclosed by doctors, alongside impacting patient competency and willingness to give consent. To enhance adherence to informed consent rules in the context of medical AI, this paper advocates for a shift towards a patient-centric information disclosure standard, the restructuring of medical liability rules, the augmentation of professional training, and the advancement of public understanding through educational initiatives.

  • Article type: Journal Article
    Medical AI has transformed modern medicine and created a new environment for future doctors. However, medical education has failed to keep pace with these advances, and it is essential to provide systematic education on medical AI to current undergraduate and postgraduate medical students. To address this issue, our study utilized the Unified Theory of Acceptance and Use of Technology model to identify key factors that influence the acceptance of and intention to use medical AI. We collected data from 1,243 undergraduate and postgraduate students from 13 universities and 33 hospitals; 54.3% reported prior experience using medical AI. Our findings indicated that postgraduate medical students have a higher level of awareness of medical AI than undergraduates. The intention to use medical AI is positively associated with factors such as performance expectancy, habit, hedonic motivation, and trust. Therefore, future medical education should prioritize promoting students' performance in training, and courses should be designed to be both easy to learn and engaging, ensuring that students are equipped with the necessary skills to succeed in their future medical careers.

  • Article type: Journal Article
    Modern artificial intelligence (AI) approaches mainly rely on neural network (NN) or deep NN methodologies. However, these approaches require large amounts of data to train, given that the number of their trainable parameters has a polynomial relationship to their neuron counts. This property renders deep NNs challenging to apply in fields operating with small, albeit representative, datasets, such as healthcare. In this paper, we propose a novel neural network architecture that trains the spatial positions of neural soma and axon pairs, where weights are calculated from the axon-soma distances of connected neurons. We refer to this method as the distance-encoding biomorphic-informational (DEBI) neural network. This concept significantly reduces the number of trainable parameters compared to conventional neural networks. We demonstrate that DEBI models can yield comparable predictive performance on tabular and imaging datasets while requiring a fraction of the trainable parameters of conventional NNs, resulting in a highly scalable solution.
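
    The abstract does not give the exact weight function, but the geometry-based parameterization is easy to sketch. Below is a minimal PyTorch layer assuming 3-D coordinates and an inverse-distance weighting w = 1 / (1 + d); the actual DEBI formulation may differ.

```python
import torch
import torch.nn as nn

class DistanceEncodedLayer(nn.Module):
    """Dense layer whose weights derive from learned neuron geometry.

    Each input neuron owns a trainable axon position and each output
    neuron a trainable soma position; connection weights are a function
    of the axon-soma distance. Trainable parameters scale with
    (n_in + n_out) * dim rather than n_in * n_out.
    """

    def __init__(self, n_in: int, n_out: int, dim: int = 3):
        super().__init__()
        self.axons = nn.Parameter(torch.randn(n_in, dim))   # one per input
        self.somas = nn.Parameter(torch.randn(n_out, dim))  # one per output
        self.bias = nn.Parameter(torch.zeros(n_out))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        d = torch.cdist(self.axons, self.somas)  # pairwise distances, (n_in, n_out)
        w = 1.0 / (1.0 + d)                      # assumed: closer pairs bind stronger
        return x @ w + self.bias

layer = DistanceEncodedLayer(n_in=64, n_out=16)
print(sum(p.numel() for p in layer.parameters()))  # 64*3 + 16*3 + 16 = 256
# A dense nn.Linear(64, 16) would hold 64*16 + 16 = 1,040 parameters.
```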

  • Article type: Journal Article
    BACKGROUND: Artificial intelligence (AI) technology has developed significantly in recent years. The fairness of medical AI is of great concern due to its direct relation to human life and health. This review analyzes the existing research literature on fairness in medical AI from the perspectives of computer science, medical science, and social science (including law and ethics). Its objective is to examine the similarities and differences in the understanding of fairness, explore influencing factors, and investigate potential measures to implement fairness in medical AI across the English and Chinese literature.
    METHODS: This study employed a scoping review methodology and searched the following databases: Web of Science, MEDLINE, PubMed, OVID, CNKI, WANFANG Data, and others, for studies on fairness issues in medical AI published through February 2023. The search was conducted using keywords such as "artificial intelligence," "machine learning," "medical," "algorithm," "fairness," "decision-making," and "bias." The collected data were charted, synthesized, and subjected to descriptive and thematic analysis.
    RESULTS: After reviewing 468 English papers and 356 Chinese papers, 53 and 42, respectively, were included in the final analysis. Our results show that the three disciplines differ significantly in their research on the core issues. Beyond algorithmic bias and human bias, data is the foundation that affects fairness in medical AI. Legal, ethical, and technological measures all promote the implementation of fairness in medical AI.
    CONCLUSIONS: Our review indicates a consensus across multidisciplinary perspectives regarding the importance of data fairness as the foundation for achieving fairness in medical AI. However, there are substantial discrepancies in core aspects such as the concept, influencing factors, and implementation measures of fairness in medical AI. Consequently, future research should facilitate interdisciplinary discussions to bridge the cognitive gaps between fields and enhance the practical implementation of fairness in medical AI.

  • Article type: Journal Article
    BACKGROUND: Artificial intelligence (AI) technologies are transforming medicine and healthcare. Scholars and practitioners have debated the philosophical, ethical, legal, and regulatory implications of medical AI, and empirical research on stakeholders' knowledge, attitudes, and practices has started to emerge. This study is a systematic review of published empirical studies of medical AI ethics, with the goal of mapping the main approaches, findings, and limitations of the scholarship to inform future practice considerations.
    METHODS: We searched seven databases for published peer-reviewed empirical studies on medical AI ethics and evaluated them in terms of the types of technologies studied, geographic locations, stakeholders involved, research methods used, ethical principles studied, and major findings.
    RESULTS: Thirty-six studies were included (published 2013-2022). They typically belonged to one of three topics: exploratory studies of stakeholder knowledge of and attitudes toward medical AI, theory-building studies testing hypotheses about the factors contributing to stakeholders' acceptance of medical AI, and studies identifying and correcting bias in medical AI.
    CONCLUSIONS: There is a disconnect between the high-level ethical principles and guidelines developed by ethicists and the empirical research on the topic, and a need to embed ethicists, in tandem with AI developers, clinicians, patients, and scholars of innovation and technology adoption, in the study of medical AI ethics.

  • Article type: Journal Article
    ChatGPT is a foundation artificial intelligence (AI) model that has opened up new opportunities in digital healthcare. In particular, it can serve as a co-pilot tool for doctors in the interpretation, summarization, and completion of reports. Furthermore, it builds on the ability to access the large body of literature and knowledge on the internet, so ChatGPT can generate acceptable responses to medical examination questions. Hence, it offers the possibility of enhancing healthcare accessibility, scalability, and effectiveness. Nonetheless, ChatGPT is vulnerable to inaccuracies, false information, and bias. This paper briefly describes the potential of foundation AI models to transform future healthcare by presenting ChatGPT as an example tool.
