Deep learning (DL)

  • Article type: Journal Article
    Advanced bioinformatics analysis, such as systems biology (SysBio) and artificial intelligence (AI) approaches, including machine learning (ML) and deep learning (DL), is increasingly present in stem cell (SC) research. An approximate timeline of these developments and their global impact is still lacking. We conducted a scoping review of the contribution of SysBio and AI analysis to SC research and therapy development based on literature published in PubMed between 2000 and 2024. We identified an 8-10-fold increase in research output related to all three search terms between 2000 and 2021, with a 10-fold increase in AI-related output since 2010. Use of SysBio and AI still predominates in preclinical basic research, with increasing use in clinically oriented translational medicine since 2010. SysBio- and AI-related research was found all over the globe, with SysBio output led by the United States (US, n=1487), United Kingdom (UK, n=1094), Germany (n=355), The Netherlands (n=339), Russia (n=215), and France (n=149), while for AI-related research the US (n=853) and UK (n=258) take a strong lead, followed by Switzerland (n=69), The Netherlands (n=37), and Germany (n=19). The US and UK are most active in SC publications related to AI/ML and AI/DL. The prominent use of SysBio in ESC research was recently overtaken by prominent use of AI in iPSC and MSC research. This study reveals the global evolution and growing intersection of AI, SysBio, and SC research over the past two decades, with substantial growth in all three fields and exponential increases in AI-related research in the past decade.
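
    A minimal sketch of how yearly PubMed publication counts for a search term could be retrieved via the NCBI E-utilities esearch endpoint, the kind of query underlying this scoping review. The search string below is purely illustrative and is not the authors' exact query.

    # Illustrative only: the search term is an assumption, not the review's query string.
    import requests

    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_count(term: str, year: int) -> int:
        """Return the number of PubMed records matching `term` published in `year`."""
        params = {
            "db": "pubmed",
            "term": term,
            "datetype": "pdat",      # filter by publication date
            "mindate": str(year),
            "maxdate": str(year),
            "retmode": "json",
        }
        resp = requests.get(EUTILS, params=params, timeout=30)
        resp.raise_for_status()
        return int(resp.json()["esearchresult"]["count"])

    if __name__ == "__main__":
        term = '"stem cells" AND ("artificial intelligence" OR "machine learning")'
        for year in range(2000, 2025, 4):
            print(year, pubmed_count(term, year))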

  • Article type: Journal Article
    As an important biomarker of neural aging, brain age reflects the integrity and health of the human brain. Accurate prediction of brain age could help to understand the underlying mechanisms of neural aging. In this study, a cross-stratified ensemble learning algorithm with a stacking strategy was proposed to obtain brain age and the derived predicted age difference (PAD) from T1-weighted magnetic resonance imaging (MRI) data. The approach was characterized by two modules: one comprised three base learners (3D-DenseNet, 3D-ResNeXt, and 3D-Inception-v4); the other comprised 14 secondary learners based on linear regression. To evaluate performance, our method was compared with single base learners, regular ensemble learning algorithms, and state-of-the-art (SOTA) methods. The results demonstrated that our proposed model outperformed the other models, with a mean absolute error (MAE) of 2.9405 years, a root mean-squared error (RMSE) of 3.9458 years, and a coefficient of determination (R2) of 0.9597. Furthermore, there were significant differences in PAD among the three groups of normal controls (NC), mild cognitive impairment (MCI), and Alzheimer's disease (AD), with an increasing trend across NC, MCI, and AD. It was concluded that the proposed algorithm can be effectively used to compute brain age and PAD, offering potential for the early diagnosis and assessment of normal brain aging and AD.
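
    A minimal sketch of a stacking regression ensemble with a linear-regression meta-learner and MAE/RMSE/R2 evaluation, as described above. The 3D-CNN base learners and the cross-stratified scheme are not reproduced; lightweight stand-in regressors and synthetic features are assumptions for illustration.

    # Stand-in base learners and synthetic data; not the paper's 3D-CNN pipeline.
    import numpy as np
    from sklearn.ensemble import StackingRegressor, RandomForestRegressor, GradientBoostingRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 64))                                        # stand-in image-derived features
    y = 20 + X[:, :5].sum(axis=1) * 3 + rng.normal(scale=2.0, size=500)   # synthetic "age"

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    stack = StackingRegressor(
        estimators=[
            ("base1", RandomForestRegressor(n_estimators=100, random_state=0)),
            ("base2", GradientBoostingRegressor(random_state=0)),
        ],
        final_estimator=LinearRegression(),   # secondary learner: linear regression
        cv=5,                                 # out-of-fold base predictions feed the meta-learner
    )
    stack.fit(X_tr, y_tr)
    pred = stack.predict(X_te)

    mae = mean_absolute_error(y_te, pred)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    r2 = r2_score(y_te, pred)
    pad = pred - y_te                         # predicted age difference (PAD)
    print(f"MAE={mae:.3f}  RMSE={rmse:.3f}  R2={r2:.3f}  mean PAD={pad.mean():.3f}")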

  • Article type: Journal Article
    Circular leaf spot (CLS) disease poses a significant threat to persimmon cultivation, leading to substantial harvest reductions. Existing visual and destructive inspection methods suffer from subjectivity, limited accuracy, and considerable time consumption. This study presents an automated pre-identification method for the disease through a deep learning (DL)-based pipeline integrated with optical coherence tomography (OCT), thereby addressing the shortcomings of the existing methods. The investigation yielded promising outcomes by employing transfer learning with pre-trained DL models, specifically DenseNet-121 and VGG-16. The DenseNet-121 model excelled in differentiating among three stages of CLS disease: healthy (H), apparently healthy (or healthy-infected, HI), and infected (I). The model achieved precision values of 0.7823 for class-H, 0.9005 for class-HI, and 0.7027 for class-I, supported by recall values of 0.8953 for class-HI and 0.8387 for class-I. Moreover, the performance of CLS detection was enhanced by a supplemental quality inspection model utilizing VGG-16, which attained an accuracy of 98.99% in discriminating between low-detail and high-detail images. In addition, this study employed a combination of LAMP and A-scan for the dataset labeling process, significantly enhancing the accuracy of the models. Overall, this study underscores the potential of DL techniques integrated with OCT to enhance disease identification processes in agricultural settings, particularly in persimmon cultivation, by offering efficient and objective pre-identification of CLS and enabling early intervention and management strategies.
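
    A minimal sketch of transfer learning with a pre-trained DenseNet-121 for the three CLS stages (H, HI, I). It assumes 2D image inputs and a frozen backbone; the paper's OCT preprocessing, LAMP/A-scan labeling, and training schedule are not reproduced.

    # Stand-in training step on a dummy batch; class count follows the abstract (H, HI, I).
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_CLASSES = 3  # healthy (H), apparently healthy (HI), infected (I)

    model = models.densenet121(weights="IMAGENET1K_V1")     # ImageNet pre-trained backbone
    for p in model.features.parameters():
        p.requires_grad = False                             # freeze convolutional features
    model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

    optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    images = torch.randn(4, 3, 224, 224)                    # dummy batch (B, 3, 224, 224)
    labels = torch.tensor([0, 1, 2, 1])
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print("loss:", float(loss))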

  • Article type: Journal Article
    Cardiovascular diseases are the main cause of death in the world, and cardiovascular imaging techniques are the mainstay of noninvasive diagnosis. Aortic stenosis is a lethal cardiac disease preceded by aortic valve calcification for several years. Data-driven tools developed with deep learning (DL) algorithms can process and categorize medical image data, providing fast diagnoses with considerable reliability and improving healthcare effectiveness. A systematic review of DL applications on medical images for pathologic calcium detection concluded that there are established techniques in this field, using primarily CT scans, at the expense of radiation exposure. Echocardiography is an unexplored alternative for detecting calcium, but it still needs technological development. In this article, a fully automated method based on convolutional neural networks (CNNs) was developed to detect aortic calcification in echocardiography images, consisting of two essential processes: (1) an object detector to locate the aortic valve, achieving 95% precision and 100% recall; and (2) a classifier to identify calcium structures in the valve, which achieved 92% precision and 100% recall. The outcome of this work is the possibility of automated detection of aortic valve calcification, a lethal and prevalent disease, in echocardiography.
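
    A minimal sketch of the two-stage pipeline described above (locate the valve, then classify calcification), using stand-in models: a pre-trained Faster R-CNN as the object detector and a tiny CNN as the calcium classifier. The paper does not specify these exact architectures; they are assumptions for illustration.

    # Stand-in detector and classifier; random input stands in for an echocardiography frame.
    import torch
    import torch.nn as nn
    from torchvision.models.detection import fasterrcnn_resnet50_fpn

    detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()   # stage 1: region localization
    classifier = nn.Sequential(                                    # stage 2: calcium vs. no calcium
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(16, 2),
    )

    def detect_and_classify(frame: torch.Tensor) -> torch.Tensor:
        """frame: (3, H, W) image tensor scaled to [0, 1]."""
        with torch.no_grad():
            det = detector([frame])[0]                             # boxes sorted by confidence
        if len(det["boxes"]) == 0:
            return torch.tensor([float("nan"), float("nan")])
        x1, y1, x2, y2 = det["boxes"][0].round().int().tolist()
        crop = frame[:, y1:max(y2, y1 + 1), x1:max(x2, x1 + 1)].unsqueeze(0)
        crop = nn.functional.interpolate(crop, size=(64, 64))      # resize the top-scoring crop
        return classifier(crop).softmax(dim=1)[0]                  # class probabilities

    print(detect_and_classify(torch.rand(3, 480, 640)))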

  • Article type: Journal Article
    Malaria remains a significant global health challenge due to the growing drug resistance of Plasmodium parasites and the failure to block transmission within the human host. While machine learning (ML) and deep learning (DL) methods have shown promise in accelerating antimalarial drug discovery, the performance of deep learning models based on molecular graphs and other co-representation approaches warrants further exploration. Current research has overlooked mutant strains of the malaria parasite with varying degrees of sensitivity or resistance, and has not covered the prediction of inhibitory activities across the three major life cycle stages (liver, asexual blood, and gametocyte) within the human host, which is crucial for both treatment and transmission blocking. In this study, we manually curated a benchmark antimalarial activity dataset comprising 407,404 unique compounds and 410,654 bioactivity data points across ten Plasmodium phenotypes and three stages. Performance was systematically compared among two fingerprint-based ML models (RF::Morgan and XGBoost::Morgan), four graph-based DL models (GCN, GAT, MPNN, and Attentive FP), and three co-representation DL models (FP-GNN, HiGNN, and FG-BERT), which revealed that: 1) the FP-GNN model achieved the best predictive performance, outperforming the other methods in distinguishing active and inactive compounds across balanced, more positive, and more negative datasets, with an overall AUROC of 0.900; 2) fingerprint-based ML models outperformed graph-based DL models on large datasets (>1000 compounds), but the three co-representation DL models were able to incorporate domain-specific chemical knowledge to bridge this gap, achieving better predictive performance. These findings provide valuable guidance for selecting appropriate ML and DL methods for antimalarial activity prediction tasks. The interpretability analysis of the FP-GNN model revealed its ability to accurately capture the key structural features responsible for the liver- and blood-stage activities of the known antimalarial drug atovaquone. Finally, we developed a web server, MalariaFlow, incorporating these high-quality models for antimalarial activity prediction, virtual screening, and similarity search. It successfully predicted novel triple-stage antimalarial hits validated through experimental testing, demonstrating its effectiveness and value in discovering potential multistage antimalarial drug candidates.
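
    A minimal sketch of the fingerprint-based baseline named above (random forest on Morgan fingerprints) evaluated with AUROC. The tiny SMILES list and labels are made up for illustration; the curated dataset and the graph/co-representation models (FP-GNN, HiGNN, FG-BERT) are not reproduced.

    # Toy SMILES and labels; illustrative only, not antimalarial data.
    import numpy as np
    from rdkit import Chem
    from rdkit.Chem import AllChem
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score

    def morgan_fp(smiles: str, radius: int = 2, n_bits: int = 2048) -> np.ndarray:
        """Morgan (ECFP-like) fingerprint as a dense bit vector."""
        mol = Chem.MolFromSmiles(smiles)
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
        return np.array(fp, dtype=np.uint8)

    smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC", "C1CCCCC1", "O=C(O)c1ccccc1"]
    labels = np.array([0, 1, 1, 0, 0, 1])

    X = np.stack([morgan_fp(s) for s in smiles])
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
    scores = clf.predict_proba(X)[:, 1]          # training-set scores, for illustration only
    print("AUROC (train, illustrative):", roc_auc_score(labels, scores))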

  • Article type: Journal Article
    Vehicle communication is one of the most vital aspects of modern transportation systems because it enables real-time data transmission between vehicles and infrastructure to improve traffic flow and road safety. The next generation of mobile technology, 5G, was created to address the growing need for high data rates and the quality-of-service issues of earlier generations. 5G cellular technology aims to eliminate penetration loss by segregating outdoor and indoor settings and allowing extremely high transmission speeds, achieved by installing hundreds of dispersed antenna arrays using a distributed antenna system (DAS). Massive multiple-input multiple-output (MIMO) systems are realized through such DASs, in which hundreds of dispersed antenna arrays are deployed. Because deep learning (DL) techniques employ artificial neural networks with at least one hidden layer, they are used in this study for vehicle recognition; they can swiftly process vast quantities of labeled training data to identify features. Therefore, this paper employed the VGG19 DL model through transfer learning to address the task of vehicle detection and obstacle identification. It also proposes a novel horizontal handover prediction method based on channel characteristics. The suggested techniques are designed for heterogeneous networks and horizontal handovers using DL. In the designated surrounding regions of 5G environments, the proposed detection and handover algorithms identified vehicles with a success rate of 97% and predicted the next handover station.
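
    A minimal sketch of VGG19 transfer learning for a two-class vehicle/obstacle recognizer, the recognition stage described above. The frozen backbone, class count, and dummy batch are assumptions; the channel-based handover predictor is not modeled here.

    # Stand-in training step for the VGG19 transfer-learning head.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.vgg19(weights="IMAGENET1K_V1")             # ImageNet pre-trained VGG19
    for p in model.features.parameters():
        p.requires_grad = False                               # keep convolutional weights fixed
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)  # vehicle vs. obstacle

    optimizer = torch.optim.SGD(model.classifier[6].parameters(), lr=1e-3, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    images, targets = torch.randn(2, 3, 224, 224), torch.tensor([0, 1])  # dummy batch
    loss = criterion(model(images), targets)
    loss.backward()
    optimizer.step()
    print("loss:", float(loss))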

  • Article type: Journal Article
    In cloud computing (CC), task scheduling allocates tasks to the most suitable resources for execution. This article proposes a model for task scheduling utilizing multi-objective optimization and a deep learning (DL) model. Initially, multi-objective scheduling of incoming user tasks is carried out utilizing the proposed hybrid fractional flamingo beetle optimization (FFBO), which is formed by integrating dung beetle optimization (DBO), the flamingo search algorithm (FSA), and fractional calculus (FC). Here, the fitness function depends on reliability, cost, predicted energy, and makespan, where the predicted energy is forecasted by a deep residual network (DRN). Thereafter, task scheduling is accomplished based on DL using the proposed deep feedforward neural network fused with long short-term memory (DFNN-LSTM), which is a combination of DFNN and LSTM. Moreover, when scheduling the workflow, the task parameters and the virtual machine's (VM) live parameters are taken into consideration. Task parameters are the earliest finish time (EFT), earliest start time (EST), task length, task priority, and actual task running time, whereas VM parameters include memory utilization, bandwidth utilization, capacity, and central processing unit (CPU). The proposed DFNN-LSTM+FFBO model achieved superior makespan, energy, and resource utilization of 0.188, 0.950 J, and 0.238, respectively.
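
    A minimal sketch of a multi-objective fitness for ranking candidate task-to-VM schedules over the objectives named above (makespan, cost, predicted energy, reliability). The weights, the simplified sequential-runtime model, and the fixed energy rate standing in for the DRN prediction are assumptions for illustration, not the paper's formulation.

    # Illustrative fitness only; FFBO itself and the DRN energy predictor are not implemented.
    from dataclasses import dataclass

    @dataclass
    class Task:
        length: float        # task length (e.g., million instructions); priority unused here
        priority: int

    @dataclass
    class VM:
        capacity: float      # processing capacity (e.g., MIPS)
        cost_per_s: float
        energy_per_s: float  # stand-in for a DRN-predicted energy rate
        reliability: float   # probability of successful execution

    def fitness(schedule: list[int], tasks: list[Task], vms: list[VM],
                w=(0.4, 0.2, 0.2, 0.2)) -> float:
        """Lower is better. schedule[i] is the VM index assigned to task i."""
        finish = [0.0] * len(vms)
        cost = energy = 0.0
        reliability = 1.0
        for t, vm_idx in zip(tasks, schedule):
            vm = vms[vm_idx]
            runtime = t.length / vm.capacity
            finish[vm_idx] += runtime           # tasks on the same VM run sequentially
            cost += runtime * vm.cost_per_s
            energy += runtime * vm.energy_per_s
            reliability *= vm.reliability
        makespan = max(finish)
        # Weighted sum; reliability enters as (1 - reliability) so lower is better.
        return w[0]*makespan + w[1]*cost + w[2]*energy + w[3]*(1.0 - reliability)

    tasks = [Task(4000, 1), Task(2500, 2), Task(1000, 1)]
    vms = [VM(1000, 0.02, 5.0, 0.99), VM(500, 0.01, 3.0, 0.97)]
    print(fitness([0, 1, 0], tasks, vms))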

  • Article type: Journal Article
    OBJECTIVE: Electroencephalography (EEG) has evolved into an indispensable instrument for estimating cognitive workload in various domains. ML and DL techniques have been increasingly employed to develop accurate workload estimation and classification models based on EEG data. The goal of this systematic review is to compile the body of research on EEG workload estimation and classification using ML and DL approaches.
    METHODS: The PRISMA procedures were followed in conducting the review. Searches were conducted in the SpringerLink, ACM Digital Library, IEEE Xplore, PubMed, and ScienceDirect databases from inception to February 16, 2024. Studies were selected based on predefined inclusion criteria. Data were extracted to capture study design, participant demographics, EEG features, ML/DL algorithms, and reported performance metrics.
    RESULTS: Out of the 125 items that emerged, 33 scientific papers were fully evaluated. The study designs, participant demographics, and EEG workload measurement and categorization techniques used in the investigations differed. SVMs, CNNs, and hybrid networks are examples of ML and DL approaches that were often used, and the accuracy scores achieved by different ML/DL models were analyzed. Furthermore, a relationship was noted between sampling frequency and model accuracy, with higher sampling frequencies generally leading to improved performance. The percentage distribution of ML/DL methods revealed that SVMs, CNNs, and RNNs were the most commonly utilized techniques, reflecting their robustness in handling EEG data.
    CONCLUSIONS: The comprehensive review emphasizes how ML may be used to identify mental workload across a variety of disciplines using EEG data. Optimizing practical applications requires multimodal data integration, standardization efforts, and real-world validation studies. These systems will also be further improved by addressing ethical issues and investigating new EEG properties, which will improve human-computer interaction and performance assessment.
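
    A minimal sketch of a pipeline commonly reported in the reviewed studies: band-power features extracted from EEG epochs followed by an SVM workload classifier. The synthetic signals, channel count, sampling rate, and band definitions are assumptions for illustration, not taken from any specific study in the review.

    # Synthetic EEG epochs; band powers via Welch PSD, then SVM with cross-validation.
    import numpy as np
    from scipy.signal import welch
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    FS = 256                                      # assumed sampling frequency in Hz
    BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_powers(epoch: np.ndarray) -> np.ndarray:
        """epoch: (channels, samples) -> concatenated mean band powers per channel."""
        f, psd = welch(epoch, fs=FS, nperseg=FS)
        feats = []
        for lo, hi in BANDS.values():
            mask = (f >= lo) & (f < hi)
            feats.append(psd[:, mask].mean(axis=1))
        return np.concatenate(feats)

    rng = np.random.default_rng(0)
    epochs = rng.normal(size=(120, 8, 2 * FS))    # 120 two-second epochs, 8 channels
    labels = rng.integers(0, 2, size=120)         # low vs. high workload (synthetic)

    X = np.stack([band_powers(e) for e in epochs])
    print("CV accuracy:", cross_val_score(SVC(kernel="rbf", C=1.0), X, labels, cv=5).mean())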

  • Article type: Journal Article
    OBJECTIVE: Primary bone tumors are most often found in the bones around the knee joint. However, the detection of primary bone tumors on radiographs can be challenging for inexperienced or junior radiologists. This study aimed to develop a deep learning (DL) model for the detection of primary bone tumors around the knee joint on radiographs.
    METHODS: From four tertiary referral centers, we recruited 687 patients diagnosed with bone tumors (including osteosarcoma, chondrosarcoma, giant cell tumor of bone, bone cyst, enchondroma, fibrous dysplasia, etc.; 417 males, 270 females; mean age 22.8±13.2 years) by postoperative pathology or clinical imaging/follow-up, and 1,988 participants with normal bone radiographs (1,152 males, 836 females; mean age 27.9±12.2 years). The dataset was split into a training set for model development and an internal independent test set and an external test set for model validation. The trained model located bone tumor lesions and then detected tumor patients. Receiver operating characteristic curves and Cohen's kappa coefficient were used to evaluate detection performance. We compared the model's detection performance with that of two junior radiologists in the internal test set using permutation tests.
    RESULTS: The DL model correctly localized 94.5% and 92.9% of bone tumors on radiographs in the internal and external test sets, respectively. The accuracy and the area under the receiver operating characteristic curve (AUC) for DL detection of bone tumor patients were 0.964/0.920 and 0.981/0.990 for the internal and external test sets, respectively. Cohen's kappa coefficient of the model in the internal test set was significantly higher than that of the two junior radiologists with 4 and 3 years of experience in musculoskeletal radiology (model vs. reader A, 0.927 vs. 0.777, P<0.001; model vs. reader B, 0.927 vs. 0.841, P=0.033).
    CONCLUSIONS: The DL model achieved good performance in detecting primary bone tumors around the knee joint and performed better than the junior radiologists, indicating its potential for the detection of bone tumors on radiographs.
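
    A minimal sketch of the agreement comparison described above: Cohen's kappa for the model and for a reader against the reference standard, with a permutation test on the kappa difference. The labels are synthetic, and the per-case rater-swap permutation scheme is an assumption; the paper does not describe its exact procedure.

    # Synthetic labels; illustrates the metric and test, not the study's data.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(0)
    n = 300
    reference = rng.integers(0, 2, size=n)                                   # 1 = tumor, 0 = normal
    model_pred = np.where(rng.random(n) < 0.95, reference, 1 - reference)    # ~95% agreement
    reader_pred = np.where(rng.random(n) < 0.88, reference, 1 - reference)   # ~88% agreement

    observed = cohen_kappa_score(reference, model_pred) - cohen_kappa_score(reference, reader_pred)

    count, n_perm = 0, 2000
    for _ in range(n_perm):
        swap = rng.random(n) < 0.5                 # randomly exchange the two raters per case
        a = np.where(swap, reader_pred, model_pred)
        b = np.where(swap, model_pred, reader_pred)
        diff = cohen_kappa_score(reference, a) - cohen_kappa_score(reference, b)
        if abs(diff) >= abs(observed):
            count += 1
    p_value = (count + 1) / (n_perm + 1)
    print(f"kappa difference={observed:.3f}, permutation p={p_value:.4f}")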

  • Article type: Journal Article
    OBJECTIVE: Lumbar spine disorders are one of the common causes of low back pain (LBP). Objective and reliable measurement of the anatomical parameters of the lumbar spine is essential for the clinical diagnosis and evaluation of lumbar disorders. However, manual measurements are time-consuming and laborious, with poor consistency and repeatability. Here, we aimed to develop and evaluate an automatic model for measuring the anatomical parameters of the vertebral body and intervertebral disc based on lateral lumbar radiographs and deep learning (DL).
    METHODS: A DL-based model was developed with a dataset consisting of 1,318 lateral lumbar radiographs for the prediction of anatomical parameters, including vertebral body heights (VBH), intervertebral disc heights (IDH), and intervertebral disc angles (IDA). The mean of the values obtained by three radiologists was used as the reference standard. Statistical analysis was performed in terms of standard deviation (SD), mean absolute error (MAE), percentage of correct keypoints (PCK), intraclass correlation coefficient (ICC), regression analysis, and Bland-Altman plots to evaluate the performance of the model against the reference standard.
    RESULTS: The percentage of intra-observer landmark distances within the 3 mm threshold was 96%. The percentages of inter-observer landmark distances within the 3 mm threshold were 94% (R1 and R2), 92% (R1 and R3), and 93% (R2 and R3), respectively. The PCK of the model within the 3 mm distance threshold was 94-99%. The model-predicted values were 30.22±3.01 mm, 10.40±3.91 mm, and 10.63°±4.74° for VBH, IDH, and IDA, respectively. There was good correlation and consistency in the anatomical parameters of the lumbar vertebral body and disc between the model and the reference standard in most cases (R2=0.89-0.95, ICC=0.93-0.98, MAE=0.61-1.15, and SD=0.89-1.64).
    CONCLUSIONS: The newly proposed model based on a DL algorithm can accurately measure various anatomical parameters on lateral lumbar radiographs. This could provide an accurate and efficient measurement tool for the quantitative evaluation of spinal disorders.
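
    A minimal sketch of the percentage-of-correct-keypoints (PCK) metric at a 3 mm threshold for predicted versus reference landmark coordinates. The landmark coordinates and the pixel spacing below are synthetic assumptions; the paper's landmark definitions are not reproduced.

    # Synthetic landmarks; PCK counts predictions within the distance threshold.
    import numpy as np

    def pck(pred: np.ndarray, ref: np.ndarray, spacing_mm: float, thresh_mm: float = 3.0) -> float:
        """pred, ref: (n_landmarks, 2) pixel coordinates; spacing_mm: mm per pixel."""
        dist_mm = np.linalg.norm(pred - ref, axis=1) * spacing_mm
        return float((dist_mm <= thresh_mm).mean())

    rng = np.random.default_rng(0)
    ref = rng.uniform(0, 512, size=(25, 2))                # 25 reference landmarks
    pred = ref + rng.normal(scale=5.0, size=ref.shape)     # predictions with pixel-level noise
    print(f"PCK@3mm: {pck(pred, ref, spacing_mm=0.3):.2%}")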
