Convolutional neural network

  • Article type: Journal Article
    Segmenting the left ventricle from transgastric short-axis views (TSVs) on transesophageal echocardiography (TEE) is the cornerstone of cardiovascular assessment during perioperative management. Even for seasoned professionals, the procedure remains time-consuming and experience-dependent. The current study aims to evaluate the feasibility of deep learning for automatic segmentation by assessing the validity of different U-Net algorithms. A large dataset containing 1388 TSV acquisitions was retrospectively collected from 451 patients (32% women, average age 53.42 years) who underwent perioperative TEE between July 2015 and October 2023. After image preprocessing and data augmentation, 3336 images were included in the training set, 138 images in the validation set, and 138 images in the test set. Four deep neural networks (U-Net, Attention U-Net, UNet++, and UNeXt) were employed for left ventricle segmentation and compared in terms of the Jaccard similarity coefficient (JSC) and Dice similarity coefficient (DSC) on the test set, as well as the number of network parameters, training time, and inference time. The Attention U-Net and UNet++ models performed better in terms of JSC (highest average JSC: 86.02%) and DSC (highest average DSC: 92.00%), the UNeXt model had the fewest network parameters (1.47 million), and the U-Net model had the shortest training time (6428.65 s) and per-image inference time (101.75 ms). The Attention U-Net model outperformed the other three models in challenging cases, including images with an impaired left ventricle boundary and papillary muscle artifacts. This pioneering exploration demonstrated the feasibility of deep learning for segmenting the left ventricle from TSVs on TEE, which will facilitate an accelerated and objective alternative for cardiovascular assessment in perioperative management.
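    For reference, the JSC and DSC used above to compare the four U-Net variants are plain overlap measures between a predicted mask and an expert reference mask. The NumPy sketch below (illustrative only, not the authors' code; array names are made up) shows how they are typically computed.

```python
# Minimal sketch of the Jaccard (JSC) and Dice (DSC) similarity coefficients
# for a predicted vs. reference binary segmentation mask.
import numpy as np

def jaccard_and_dice(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8):
    """pred and ref are 0/1 (or boolean) masks of identical shape."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    jsc = intersection / (union + eps)
    dsc = 2.0 * intersection / (pred.sum() + ref.sum() + eps)
    return jsc, dsc

# Toy example; in practice pred_mask would be the network output and ref_mask the expert annotation.
rng = np.random.default_rng(0)
pred_mask = rng.random((256, 256)) > 0.5
ref_mask = rng.random((256, 256)) > 0.5
print(jaccard_and_dice(pred_mask, ref_mask))
```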

  • Article type: Journal Article
    The vulnerability of modern neural networks to random noise and deliberate attacks has raised concerns about their robustness, particularly as they are increasingly utilized in safety- and security-critical applications. Although recent research efforts have been made to enhance robustness through retraining with adversarial examples or employing data augmentation techniques, a comprehensive investigation into the effects of training data perturbations on model robustness remains lacking. This paper presents the first extensive empirical study investigating the influence of data perturbations during model retraining. The experimental analysis focuses on both random and adversarial robustness, following established practices in the field of robustness analysis. Various types of perturbations in different aspects of the dataset are explored, including the input, label, and sampling distribution. Single-factor and multi-factor experiments are conducted to assess individual perturbations and their combinations. The findings provide insights into constructing high-quality training datasets for optimizing robustness and recommend an appropriate degree of training-set perturbation that balances robustness and correctness. They also contribute to understanding model robustness in deep learning and offer practical guidance for enhancing model performance through perturbed retraining, promoting the development of more reliable and trustworthy deep learning systems for safety-critical applications.
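    To make the three perturbation axes concrete, the sketch below illustrates one simple way to perturb the input, the labels, and the sampling distribution of a training set before retraining. The perturbation mechanisms and magnitudes are assumptions for demonstration, not the configurations studied in the paper.

```python
# Illustrative training-set perturbations: input noise, label flipping, and a
# shifted sampling distribution (all parameters are made-up examples).
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 32))        # stand-in feature matrix
y = rng.integers(0, 10, size=1000)     # stand-in class labels (10 classes)

# 1) Input perturbation: additive Gaussian noise on a fraction of the samples.
noise_fraction, noise_std = 0.2, 0.1
noisy_idx = rng.choice(len(X), size=int(noise_fraction * len(X)), replace=False)
X_pert = X.copy()
X_pert[noisy_idx] += rng.normal(scale=noise_std, size=X_pert[noisy_idx].shape)

# 2) Label perturbation: flip a fraction of labels to a different random class.
flip_fraction = 0.05
flip_idx = rng.choice(len(y), size=int(flip_fraction * len(y)), replace=False)
y_pert = y.copy()
y_pert[flip_idx] = (y_pert[flip_idx] + rng.integers(1, 10, size=len(flip_idx))) % 10

# 3) Sampling-distribution perturbation: resample with class-dependent weights.
weights = np.where(y_pert < 5, 1.0, 0.5)
resample_idx = rng.choice(len(X), size=len(X), replace=True, p=weights / weights.sum())
X_retrain, y_retrain = X_pert[resample_idx], y_pert[resample_idx]
# (X_retrain, y_retrain) would then be used to retrain the model, and its random and
# adversarial robustness compared against a model trained on the clean data.
```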

  • Article type: Journal Article
    OBJECTIVE: In pediatric medicine, precise estimation of bone age is essential for skeletal maturity evaluation, growth disorder diagnosis, and therapeutic intervention planning. Conventional techniques for determining bone age depend on radiologists' subjective judgments, which may lead to non-negligible differences in the estimated bone age. This study proposes a deep learning-based model utilizing a fully connected convolutional neural network (CNN) to predict bone age from left-hand radiographs.
    METHODS: The dataset used in this study, consisting of 473 patients, was retrospectively retrieved from the PACS (Picture Archiving and Communication System) of a single institution. We developed a fully connected CNN consisting of four convolutional blocks, three fully connected layers, and a single neuron as output. The model was trained and validated on 80% of the data using the mean-squared error as a cost function to minimize the difference between the predicted and reference bone age values through the Adam optimization algorithm. Data augmentation was applied to the training and validation sets, doubling the number of data samples. The performance of the trained model was evaluated on a test dataset (20%) using various metrics, including the mean absolute error (MAE), median absolute error (MedAE), root-mean-squared error (RMSE), and mean absolute percentage error (MAPE). The code of the developed model for predicting bone age in this study is publicly available on GitHub at https://github.com/afiosman/deep-learning-based-bone-age-estimation.
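    The sketch below mirrors the architecture described in METHODS (four convolutional blocks, three fully connected layers, a single output neuron, MSE loss, and the Adam optimizer) as a PyTorch module. Channel widths, kernel sizes, the 256x256 input size, and the learning rate are illustrative assumptions rather than the authors' published configuration (see their GitHub repository for the actual code).

```python
# Regression CNN sketch: 4 conv blocks -> 3 fully connected layers -> 1 output neuron.
import torch
import torch.nn as nn

class BoneAgeCNN(nn.Module):
    def __init__(self):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            )
        self.features = nn.Sequential(block(1, 16), block(16, 32), block(32, 64), block(64, 128))
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 1),   # single neuron: predicted bone age in years
        )

    def forward(self, x):
        return self.regressor(self.features(x))

model = BoneAgeCNN()
criterion = nn.MSELoss()                                   # mean-squared error cost function
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # Adam optimization

# One illustrative training step on a dummy batch of 256x256 grayscale radiographs.
images, ages = torch.randn(8, 1, 256, 256), torch.rand(8, 1) * 18.0
loss = criterion(model(images), ages)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```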
    RESULTS: Experimental results demonstrate the sound capability of our model in predicting bone age from left-hand radiographs: in the majority of cases, the predicted and reference bone ages are close to each other, with a calculated MAE of 2.3 [1.9, 2.7; 0.95 confidence level] years, MedAE of 2.1 years, RMSE of 3.0 [1.5, 4.5; 0.95 confidence level] years, and MAPE of 0.29 (29%) on the test dataset.
    CONCLUSIONS: These findings highlight the usability of estimating bone age from left-hand radiographs, helping radiologists verify their own results while considering the margin of error of the model. The performance of our proposed model could be improved with further refinement and validation.

  • Article type: Journal Article
    OBJECTIVE: The Alberta Stroke Program Early CT Score (ASPECTS), a systematic method for assessing ischemic changes in acute ischemic stroke using non-contrast computed tomography (NCCT), is often interpreted based on expert experience and can vary between readers. This study aimed to develop a clinically applicable automatic ASPECTS system employing deep learning (DL).
    METHODS: This study enrolled 1987 NCCT scans that were retrospectively collected from four centers between January 2017 and October 2021. A DL-based system for automated ASPECTS assessment was trained on a development cohort (N = 1767) and validated on an independent test cohort (N = 220). The consensus of experienced physicians was regarded as a reference standard. The validity and reliability of the proposed system were assessed against physicians' readings. A real-world prospective application study with 13,399 patients was used for system validation in clinical contexts.
    RESULTS: The DL-based system achieved an area under the receiver operating characteristic curve (AUC) of 84.97% and an intraclass correlation coefficient (ICC) of 0.84 for overall-level analysis on the test cohort. The system's diagnostic sensitivity was 94.61% for patients with dichotomized ASPECTS at a threshold of ≥ 6, with substantial agreement (ICC = 0.65) with expert ratings. Combining the system with physicians improved the AUC from 67.43% to 89.76% and reduced diagnosis time from 130.6 ± 66.3 s to 33.3 ± 8.3 s (p < 0.001). During application in clinical contexts, 94.0% (12,591) of scans successfully processed by the system were utilized by clinicians, and 96% of physicians acknowledged a significant improvement in work efficiency.
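    As a small illustration of the dichotomized analysis quoted above, the snippet below thresholds ASPECTS at ≥ 6 and computes sensitivity against an expert reference; the score arrays are made-up values, not study data.

```python
# Dichotomize ASPECTS at >= 6 and compute the system's sensitivity.
import numpy as np

reference = np.array([9, 7, 5, 10, 4, 6, 8, 3])   # expert-consensus ASPECTS (example values)
predicted = np.array([8, 7, 4, 10, 5, 6, 9, 2])   # system-estimated ASPECTS (example values)

ref_pos = reference >= 6
pred_pos = predicted >= 6
sensitivity = np.sum(pred_pos & ref_pos) / np.sum(ref_pos)
print(f"sensitivity at ASPECTS >= 6: {sensitivity:.2%}")
```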
    CONCLUSIONS: The proposed DL-based system could accurately and rapidly determine ASPECTS, which might facilitate clinical workflow for early intervention.
    CONCLUSIONS: The deep learning-based automated ASPECTS evaluation system can accurately and rapidly determine ASPECTS for early intervention in clinical workflows, reducing processing time for physicians by 74.8%, but it still requires validation by physicians in clinical applications.
    CONCLUSIONS: The deep learning-based system for ASPECTS quantification has been shown to be non-inferior to expert-rated ASPECTS. This system improved the consistency of ASPECTS evaluation and reduced processing time to 33.3 seconds per scan. 94.0% of scans successfully processed by the system were utilized by clinicians during the prospective clinical application.

  • Article type: Journal Article
    Determining the sources and spatial distributions of potentially toxic elements (PTEs) is a crucial issue in soil pollution surveys. However, uncertainty estimation for source contributions remains lacking, and accurate spatial prediction is still challenging. A robust Bayesian multivariate receptor model (RBMRM) was applied to the soil dataset of Qingzhou City (8 PTEs in 429 samples) to calculate source contributions with uncertainties. A multi-task convolutional neural network (MTCNN) was proposed to predict the spatial distributions of soil PTEs. RBMRM identified three sources, consistent with US-EPA positive matrix factorization. The natural source dominated As, Cr, Cu, and Ni contents (78.5%-86.1%) and contributed 37.1%, 61.0%, and 65.9% of Cd, Pb, and Zn, respectively, exhibiting low uncertainties with an uncertainty index (UI) < 26.7%. Industrial, traffic, and agricultural sources had significant influences on Cd, Pb, and Zn (30.2%-61.9%), with UI < 39.3%. Hg originated dominantly from atmospheric deposition (99.1%), with relatively high uncertainty (UI = 87.7%). MTCNN achieved satisfactory accuracies, with R2 of 0.357-0.896 and nRMSE of 0.092-0.366. The spatial distributions of As, Cd, Cr, Cu, Ni, Pb, and Zn were influenced by parent materials. Cd, Hg, Pb, and Zn showed significant hotspots in urban areas. This work explored a new methodological approach and proposed practical implications for soil pollution regulation.
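    The accuracy metrics quoted for MTCNN (R2 and nRMSE) can be computed as in the sketch below; normalizing the RMSE by the observed range is an assumption here, and the example concentrations are placeholders.

```python
# Coefficient of determination (R2) and range-normalized RMSE for spatial predictions.
import numpy as np

def r2_and_nrmse(observed: np.ndarray, predicted: np.ndarray):
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    nrmse = rmse / (observed.max() - observed.min())
    return r2, nrmse

# Example: observed vs. spatially predicted Cd concentrations at validation points.
obs = np.array([0.12, 0.30, 0.25, 0.45, 0.18, 0.52])
pred = np.array([0.15, 0.28, 0.22, 0.40, 0.20, 0.48])
print(r2_and_nrmse(obs, pred))
```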

  • Article type: Journal Article
    OBJECTIVE: Laparoscopic distal gastrectomy (LDG) is a difficult procedure for early-career surgeons. Artificial intelligence (AI)-based surgical step recognition is crucial for establishing context-aware computer-aided surgery systems. In this study, we aimed to develop an automatic recognition model for LDG using AI and evaluate its performance.
    METHODS: Patients who underwent LDG at our institution in 2019 were included in this study. Surgical video data were classified into the following nine steps: (1) Port insertion; (2) Lymphadenectomy on the left side of the greater curvature; (3) Lymphadenectomy on the right side of the greater curvature; (4) Division of the duodenum; (5) Lymphadenectomy of the suprapancreatic area; (6) Lymphadenectomy on the lesser curvature; (7) Division of the stomach; (8) Reconstruction; and (9) From reconstruction to completion of surgery. Two gastric surgeons manually assigned all annotation labels. Convolutional neural network (CNN)-based image classification was further employed to identify surgical steps.
    RESULTS: The dataset comprised 40 LDG videos. Over 1,000,000 frames with annotated labels of the LDG steps were used to train the deep-learning model, with 30 and 10 surgical videos for training and validation, respectively. The developed model achieved a precision of 0.88, recall of 0.87, F1 score of 0.88, and overall accuracy of 0.89. The inference speed of the proposed model was 32 ps.
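    For context, frame-level metrics of this kind (precision, recall, F1 score, overall accuracy) can be computed with scikit-learn as sketched below; the label arrays are dummy data, and macro-averaging over the nine steps is an assumption rather than the paper's stated averaging scheme.

```python
# Multi-class (9 surgical steps) classification metrics on per-frame predictions.
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 9, size=1000)                  # annotated step per frame (steps 0-8)
y_pred = np.where(rng.random(1000) < 0.9, y_true,       # simulate ~90% correct predictions
                  rng.integers(0, 9, size=1000))

print("precision:", precision_score(y_true, y_pred, average="macro", zero_division=0))
print("recall:   ", recall_score(y_true, y_pred, average="macro", zero_division=0))
print("F1 score: ", f1_score(y_true, y_pred, average="macro", zero_division=0))
print("accuracy: ", accuracy_score(y_true, y_pred))
```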
    CONCLUSIONS: The developed CNN model automatically recognized the LDG surgical process with relatively high accuracy. Adding more data to this model could provide a fundamental technology that could be used in the development of future surgical instruments.

  • Article type: Journal Article
    Closed-loop neurofeedback training utilizes neural signals such as scalp electroencephalograms (EEG) to manipulate specific neural activities and the associated behavioral performance. A spatiotemporal filter for high-density whole-head scalp EEG using a convolutional neural network can overcome the ambiguity of the signaling source, because each EEG signal includes information from remote regions. We simultaneously acquired EEG and functional magnetic resonance images in humans during brain-computer interface (BCI)-based neurofeedback training and compared the reconstructed and modeled hemodynamic responses of the sensorimotor network. Filters constructed with a convolutional neural network captured activity in the targeted network with spatial precision and specificity superior to those of EEG signals preprocessed with the standard pipelines used in BCI-based neurofeedback paradigms. The middle layers of the trained model were examined to characterize the neuronal oscillatory features that contributed to the reconstruction. Analysis of the spatial convolution layers revealed contributions to the reconstruction from distributed cortical circuitries, including the frontoparietal and sensorimotor areas, while the temporal convolution layers successfully reconstructed the hemodynamic response function. Employing a spatiotemporal filter and leveraging the electrophysiological signatures of sensorimotor excitability identified in our middle-layer analysis would contribute to the development of a more effective neurofeedback intervention.
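    The sketch below is a conceptual PyTorch illustration of a spatiotemporal EEG filter of the kind described above: a temporal convolution applied along time for each electrode followed by a spatial convolution that mixes all electrodes, with a small head producing a single reconstructed value per trial. Electrode count, kernel lengths, and the output target are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class SpatioTemporalFilter(nn.Module):
    def __init__(self, n_electrodes=64, n_temporal=16, n_spatial=8, kernel_len=25):
        super().__init__()
        # Temporal convolution: slides along time, shared across electrodes.
        self.temporal = nn.Conv2d(1, n_temporal, kernel_size=(1, kernel_len),
                                  padding=(0, kernel_len // 2))
        # Spatial convolution: spans all electrodes at once, mixing signals from remote regions.
        self.spatial = nn.Conv2d(n_temporal, n_spatial, kernel_size=(n_electrodes, 1))
        self.head = nn.Sequential(nn.BatchNorm2d(n_spatial), nn.ELU(),
                                  nn.AdaptiveAvgPool2d((1, 1)), nn.Flatten(),
                                  nn.Linear(n_spatial, 1))  # e.g., reconstructed hemodynamic amplitude

    def forward(self, x):                 # x: (batch, 1, electrodes, time)
        return self.head(self.spatial(self.temporal(x)))

model = SpatioTemporalFilter()
eeg = torch.randn(4, 1, 64, 500)          # 4 trials, 64 electrodes, 500 time samples
print(model(eeg).shape)                    # torch.Size([4, 1])
```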

  • Article type: Journal Article
    The increasing adoption of intestinal ultrasound (IUS) for monitoring inflammatory bowel diseases (IBD) by IBD providers has uncovered new challenges regarding standardized image interpretation and limitations as a research tool. Artificial intelligence approaches can help address these challenges. We aim to determine the feasibility of radiomic analysis of IUS images and to determine whether a radiomics-based classification model can accurately differentiate between normal and abnormal IUS images. We will also compare the radiomics-based model's performance to a convolutional neural network (CNN)-based classification model to understand which method is more effective for extracting meaningful information from IUS images.
    Retrospectively analyzing IUS images obtained during routine outpatient visits, we developed and tested radiomics-based and CNN-based models to distinguish between normal and abnormal images, with abnormal images defined as bowel wall thickness > 3 mm or bowel hyperemia with a modified Limberg score ≥ 1 (both are surrogate markers for inflammation). Model performance was measured by the area under the receiver operating characteristic curve (AUC).
    For this feasibility study, 125 images (33% abnormal) were analyzed. A radiomics-based model using XGBoost yielded the best classifier, with an average test AUC of 0.98, 93.8% sensitivity, 93.8% specificity, and 93.7% accuracy. The CNN-based classification model yielded an average test AUC of 0.75.
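    A hedged sketch of the radiomics-based classification step is given below: a table of radiomic features (feature extraction itself, e.g. with pyradiomics, is omitted) is fed to an XGBoost classifier and evaluated by AUC. The feature matrix, labels, and hyperparameters are placeholders, not the study's data or settings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
features = rng.normal(size=(125, 100))    # 125 images x 100 radiomic features (placeholder)
labels = rng.integers(0, 2, size=125)     # 1 = abnormal (wall thickness > 3 mm or hyperemia)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, stratify=labels, random_state=0)

clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.05, eval_metric="logloss")
clf.fit(X_train, y_train)
probs = clf.predict_proba(X_test)[:, 1]
print("test AUC:", roc_auc_score(y_test, probs))
```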
    Radiomic analysis of IUS images is feasible, and a radiomics-based classification model could accurately differentiate abnormal from normal images. Our findings establish methods to facilitate future radiomics-based IUS studies that can help standardize image interpretation and expand IUS research capabilities.

  • Article type: Journal Article
    This study introduces a deep-learning-based automatic sleep scoring system to detect sleep apnea using a single-lead electrocardiography (ECG) signal, focusing on accurately estimating the apnea-hypopnea index (AHI). Unlike other research, this work emphasizes AHI estimation, which is crucial for the diagnosis and severity evaluation of sleep apnea. The suggested model, trained on 1465 ECG recordings, combines a deep-shallow fusion network for sleep apnea detection (DSF-SANet) and gated recurrent units (GRUs) to analyze ECG signals at 1-min intervals, capturing sleep-related respiratory disturbances. Achieving a 0.87 correlation coefficient with actual AHI values, an accuracy of 0.82, an F1 score of 0.71, and an area under the receiver operating characteristic curve of 0.88 for per-segment classification, our model was effective in identifying sleep-breathing events and estimating the AHI, offering a promising tool for medical professionals.
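    The snippet below shows one simple way to roll per-segment (1-minute) apnea predictions up into an AHI estimate, i.e. respiratory events per hour of recording. Counting each positive minute as one event is an illustrative assumption; the paper's exact mapping from segment scores to AHI may differ.

```python
# Estimate the apnea-hypopnea index (AHI) from binary per-minute apnea predictions.
import numpy as np

def estimate_ahi(segment_predictions: np.ndarray) -> float:
    """segment_predictions: binary array with one entry per 1-minute ECG segment."""
    hours = len(segment_predictions) / 60.0
    events = int(np.sum(segment_predictions))
    return events / hours

rng = np.random.default_rng(1)
night = (rng.random(480) < 0.1).astype(int)   # 8-hour recording, ~10% apneic minutes (dummy)
print(f"estimated AHI: {estimate_ahi(night):.1f} events/hour")
```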

  • Article type: Journal Article
    BACKGROUND: Anterior cruciate ligament (ACL) injuries are common in sports and are critical knee injuries that require prompt diagnosis. Magnetic resonance imaging (MRI) is a strong, noninvasive tool for detecting ACL tears, although reading MR images accurately requires training. Clinicians with different levels of experience in reading MR images require different information for the diagnosis of ACL tears. Artificial intelligence (AI) image processing could be a promising approach in the diagnosis of ACL tears.
    OBJECTIVE: This study sought to use AI to (1) diagnose ACL tears from complete MR images, (2) identify torn-ACL images from complete MR images with a diagnosis of ACL tears, and (3) differentiate intact-ACL and torn-ACL MR images from the selected MR images.
    METHODS: The sagittal MR images of torn ACL (n=1205) and intact ACL (n=1018) from 800 cases and the complete knee MR images of 200 cases (100 torn ACL and 100 intact ACL) from patients aged 20-40 years were retrospectively collected. An AI approach using a convolutional neural network was applied to build models for the objective. The MR images of 200 independent cases (100 torn ACL and 100 intact ACL) were used as the test set for the models. The MR images of 40 randomly selected cases from the test set were used to compare the reading accuracy of ACL tears between the trained model and clinicians with different levels of experience.
    RESULTS: The first model differentiated between torn-ACL, intact-ACL, and other images from complete MR images with an accuracy of 0.9946, and the sensitivity, specificity, precision, and F1-score were 0.9344, 0.9743, 0.8659, and 0.8980, respectively. The final accuracy for ACL-tear diagnosis was 0.96. The model showed a significantly higher reading accuracy than less experienced clinicians. The second model identified torn-ACL images from complete MR images with a diagnosis of ACL tear with an accuracy of 0.9943, and the sensitivity, specificity, precision, and F1-score were 0.9154, 0.9660, 0.8167, and 0.8632, respectively. The third model differentiated torn- and intact-ACL images with an accuracy of 0.9691, and the sensitivity, specificity, precision, and F1-score were 0.9827, 0.9519, 0.9632, and 0.9728, respectively.
    CONCLUSIONS: This study demonstrates the feasibility of using an AI approach to provide information to clinicians who need different information from MRI to diagnose ACL tears.