Keywords: Electronic Health Records, Healthcare, Large Language Models, Medical Imaging, Natural Language Processing, Transformers

Source: DOI:10.1016/j.artmed.2024.102900

Abstract:
With Artificial Intelligence (AI) increasingly permeating various aspects of society, including healthcare, the adoption of the Transformer neural network architecture is rapidly changing many applications. The Transformer is a deep learning architecture initially developed to solve general-purpose Natural Language Processing (NLP) tasks, and it has since been adapted in many fields, including healthcare. In this survey paper, we provide an overview of how this architecture has been adopted to analyze various forms of healthcare data, including clinical NLP, medical imaging, structured Electronic Health Records (EHR), social media, bio-physiological signals, and biomolecular sequences. Furthermore, we also include articles that used the Transformer architecture to generate surgical instructions and to predict adverse outcomes after surgery, under the umbrella of critical care. Across diverse settings, these models have been used for clinical diagnosis, report generation, data reconstruction, and drug/protein synthesis. Finally, we discuss the benefits and limitations of using Transformers in healthcare and examine issues such as computational cost, model interpretability, fairness, alignment with human values, ethical implications, and environmental impact.