Convolutional neural network (CNN)

  • Article type: Journal Article
    Background: For space object detection tasks, conventional optical cameras face various application challenges, including backlight issues and dim-light conditions. As a novel optical camera, the event camera offers high temporal resolution and high dynamic range thanks to its asynchronous output, which provides a new solution to these challenges. However, that same asynchronous output makes event cameras incompatible with conventional object detection methods designed for frame images.
    Methods: An asynchronous convolutional memory network (ACMNet) for processing event camera data is proposed to address object detection in backlit and dim space scenes. The key idea of ACMNet is to first characterize the asynchronous event streams as an Event Spike Tensor (EST) voxel grid through an exponential kernel function, then extract spatial features with a feed-forward feature extraction network, aggregate temporal features with the proposed convolutional spatiotemporal memory module (ConvLSTM), and finally perform end-to-end object detection on continuous event streams.
    Results: Comparison experiments between ACMNet and classical object detection methods were carried out on Event_DVS_space7, a large-scale synthetic space event dataset based on event cameras. The results show that ACMNet outperforms the other methods, improving mAP by 12.7% while maintaining processing speed. Moreover, event cameras still perform well in backlit and dim-light conditions where conventional optical cameras fail. This research offers a novel possibility for detection under intricate lighting and motion conditions, emphasizing the advantages of event cameras in the realm of space object detection.
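The EST voxel-grid step can be sketched as follows: a minimal NumPy illustration of binning an (x, y, t, polarity) event stream into temporal bins with an exponential kernel. The function name, grid size, and `tau` are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def event_spike_tensor(events, H, W, B, tau=0.5):
    """Accumulate an event stream into a B-bin voxel grid, weighting each
    event's contribution to every temporal bin with an exponential kernel.
    events: rows of (x, y, t, polarity), with t normalized to [0, 1]."""
    grid = np.zeros((B, H, W), dtype=np.float32)
    bin_centers = (np.arange(B) + 0.5) / B
    for x, y, t, p in events:
        w = np.exp(-np.abs(t - bin_centers) / tau)  # kernel weight per bin
        grid[:, int(y), int(x)] += p * w
    return grid

# two events at pixel (2, 3): an ON event early, an OFF event late
events = np.array([[2.0, 3.0, 0.1, 1.0], [2.0, 3.0, 0.9, -1.0]])
grid = event_spike_tensor(events, H=8, W=8, B=4)
print(grid.shape)  # (4, 8, 8)
```

The resulting tensor is frame-like, which is what allows a standard feed-forward CNN to consume the asynchronous stream.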

  • Article type: Journal Article
    Background: Noninvasively detecting epidermal growth factor receptor (EGFR) mutation status in lung adenocarcinoma patients before targeted therapy remains a challenge. This study aimed to develop a 3-dimensional (3D) convolutional neural network (CNN)-based deep learning model to predict EGFR mutation status using computed tomography (CT) images.
    Methods: We retrospectively collected 660 patients from 2 large medical centers. The patients were divided into training (n=528) and external test (n=132) sets according to hospital source. The CNN model was trained in a supervised end-to-end manner, and its performance was evaluated on the external test set. To benchmark the CNN model, we constructed 1 clinical and 3 radiomics models. Furthermore, we constructed a comprehensive model combining the highest-performing radiomics and CNN models. Receiver operating characteristic (ROC) curves were used as the primary measure of performance for each model, and the DeLong test was used to compare performance differences between models.
    Results: Compared with the clinical model [training set, area under the curve (AUC) = 69.6%, 95% confidence interval (CI), 0.661-0.732; test set, AUC = 68.4%, 95% CI, 0.609-0.752] and the highest-performing radiomics model (training set, AUC = 84.3%, 95% CI, 0.812-0.873; test set, AUC = 72.4%, 95% CI, 0.653-0.794), the CNN model (training set, AUC = 94.3%, 95% CI, 0.920-0.961; test set, AUC = 94.7%, 95% CI, 0.894-0.978) had significantly better performance for predicting EGFR mutation status. In addition, compared with the comprehensive model (training set, AUC = 95.7%, 95% CI, 0.942-0.971; test set, AUC = 87.4%, 95% CI, 0.820-0.924), the CNN model had better stability.
    Conclusions: The CNN model has excellent performance in non-invasively predicting EGFR mutation status in patients with lung adenocarcinoma and is expected to become an auxiliary tool for clinicians.
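Since every model above is compared by AUC, a short sketch of how AUC is computed from predicted scores may help. This is the generic Mann-Whitney formulation, not the study's code:

```python
import numpy as np

def auc_score(y_true, y_score):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive is scored above a randomly chosen negative."""
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

y = np.array([0, 0, 1, 1])
s = np.array([0.1, 0.4, 0.35, 0.8])
print(auc_score(y, s))  # 0.75
```

The DeLong test the authors use compares two such AUCs computed on the same cases, accounting for their correlation.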

  • Article type: Journal Article
    Maintaining security in communication networks has long been a major concern. The issue has become increasingly crucial due to the emergence of new communication architectures such as the Internet of Things (IoT) and the growing sophistication of infiltration techniques. Previous intrusion detection systems (IDSs), which often use a centralized design to identify threats, are now ineffective in IoT-based networks. To resolve these issues, this study presents a novel cooperative approach to IoT intrusion detection that may help address certain current security problems. The suggested approach uses Black Hole Optimization (BHO) to select the most important attributes that best describe the communication between objects. Additionally, a novel matrix-based method for describing the network's communication properties is put forward. These two feature sets form the inputs of the suggested intrusion detection model. The technique splits the network into a number of subnets using a software-defined network (SDN). Each subnet is monitored by a controller node, which uses a parallel combination of convolutional neural networks (PCNN) to determine whether security threats are present in the traffic passing through its subnet. The proposed method also uses majority voting for the cooperation of controller nodes in order to detect attacks more accurately. The findings demonstrate that, in comparison with prior approaches, the suggested cooperative strategy can detect attacks in the NSL-KDD and UNSW-NB15 datasets with accuracies of 99.89% and 97.72%, respectively, an improvement of at least 0.6 percentage points.
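The cooperation step among controller nodes can be sketched as a plain majority vote over per-subnet verdicts. The tie-breaking rule below (fall back to "attack" as the conservative choice) is an assumption for illustration, not necessarily the paper's rule:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine verdicts from several subnet controller nodes.
    Ties fall back to 'attack' as the conservative choice (an assumption)."""
    counts = Counter(predictions)
    top = counts.most_common()
    if len(top) > 1 and top[0][1] == top[1][1]:
        return "attack"
    return top[0][0]

print(majority_vote(["attack", "normal", "attack"]))  # attack
```

Each controller only sees its own subnet's traffic, so the vote is what turns local PCNN decisions into a network-wide verdict.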

  • Article type: Journal Article
    Sandification can degrade the strength and quality of dolomite and, to a certain extent, compromise the stability of a tunnel's surrounding rock as an unfavorable geological boundary. Sandification degree classification of sandy dolomite is one of the non-trivial challenges faced by geotechnical engineering projects such as tunneling in complex geographical environments. Traditional methods that quantitatively measure physical parameters or analyze certain visual features are either time-consuming or inaccurate in practical use. To address these issues, we, for the first time, introduce convolutional neural network (CNN)-based image classification methods to the dolomite sandification degree classification task. In this study, we make a significant contribution by establishing a large-scale dataset comprising 5729 images, classified into four distinct sandification degrees of sandy dolomite. These images were collected from the vicinity of a tunnel located in the Yuxi section of the CYWD Project in China. We conducted comprehensive classification experiments using this dataset. The results demonstrate that CNN-based models achieve an accuracy of up to 91.4%, underscoring the pioneering role of our work in creating this dataset and its potential for applications in complex geographical analyses.
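For a four-class image dataset like this, a stratified split keeps every sandification degree proportionally represented in both training and validation sets. The helper below is a generic illustration, not the authors' pipeline:

```python
import random

def stratified_split(labels, val_frac=0.2, seed=0):
    """Split sample indices per class so each sandification degree keeps
    the same proportion in the training and validation sets."""
    rng = random.Random(seed)
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    train, val = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        k = int(len(idxs) * val_frac)
        val += idxs[:k]
        train += idxs[k:]
    return train, val

labels = [0, 0, 1, 1, 2, 2, 3, 3] * 10   # toy stand-in for 4-degree labels
tr, va = stratified_split(labels)
print(len(tr), len(va))  # 64 16
```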

  • Article type: Journal Article
    Driver monitoring systems (DMS) are crucial in autonomous driving systems (ADS) when users are concerned about driver/vehicle safety. In a DMS, the most significant factor influencing driver/vehicle safety is the classification of driver distractions or activities. The driver's distractions or activities convey meaningful information to the ADS, enhancing driver/vehicle safety during real-time driving. Classifying driver distraction or activity is challenging due to the unpredictable nature of human driving. This paper proposes a convolutional block attention module embedded in a Visual Geometry Group (CBAM VGG16) deep learning architecture to improve the classification performance for driver distractions. The proposed CBAM VGG16 architecture is a hybrid of CBAM layers and conventional VGG16 network layers. Adding a CBAM layer to the traditional VGG16 architecture enhances the model's feature extraction capacity and improves driver distraction classification results. To validate the performance of our proposed CBAM VGG16 architecture, we tested the model on the American University in Cairo (AUC) distracted driver dataset version 2 (AUCD2) for camera 1 and camera 2 images. Our experimental results show that the proposed CBAM VGG16 architecture achieved 98.65% classification accuracy for camera 1 and 97.85% for camera 2 of the AUCD2 dataset. We also compared the driver distraction classification performance of CBAM VGG16 with the DenseNet121, Xception, MobileNetV2, InceptionV3, and VGG16 architectures in terms of accuracy, loss, precision, F1 score, recall, and confusion matrix. The classification results indicate that the proposed CBAM VGG16 achieves a 3.7% improvement for AUCD2 camera 1 images and 5% for camera 2 images compared with the conventional VGG16 deep learning classification model.
We also tested our proposed architecture with different hyperparameter values and estimated the optimal values for the best driver distraction classification. The importance of data augmentation techniques for the data diversity of the CBAM VGG16 model is also validated with respect to overfitting scenarios. Grad-CAM visualization of our proposed CBAM VGG16 architecture is also considered in our study, and the results show that the VGG16 architecture without CBAM layers is less attentive to the essential parts of driver distraction images. Furthermore, we examined the classification performance of our proposed CBAM VGG16 architecture in terms of the number of model parameters, model size, various input image resolutions, cross-validation, Bayesian search optimization, and different CBAM layers. The results indicate that the CBAM layers in our proposed architecture enhance the classification performance of the conventional VGG16 architecture and outperform state-of-the-art deep learning architectures.
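The channel-attention half of a CBAM block can be sketched as below; random weights stand in for the learned shared-MLP parameters, and the real module also has a spatial-attention half that follows this one:

```python
import numpy as np

def channel_attention(feat, reduction=4):
    """CBAM-style channel attention: squeeze spatial dims with average and
    max pooling, pass both through a shared 2-layer MLP, sum, apply a
    sigmoid gate, and rescale the channels of the input feature map."""
    C, H, W = feat.shape
    rng = np.random.default_rng(0)
    W1 = rng.standard_normal((C // reduction, C)) * 0.1  # learned in practice
    W2 = rng.standard_normal((C, C // reduction)) * 0.1  # learned in practice
    def mlp(v):
        return W2 @ np.maximum(W1 @ v, 0.0)  # ReLU bottleneck
    avg = feat.mean(axis=(1, 2))
    mx = feat.max(axis=(1, 2))
    gate = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))  # sigmoid
    return feat * gate[:, None, None]

x = np.ones((8, 4, 4))          # toy C=8 feature map
y = channel_attention(x)
print(y.shape)  # (8, 4, 4)
```

Because the gate is a per-channel scalar in (0, 1), the block reweights feature channels without changing the map's shape, which is what lets it drop into VGG16 unchanged.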

  • Article type: Journal Article
    Aims/Background Cervical cancer continues to be a significant cause of cancer-related deaths among women, especially in low-resource settings where screening and follow-up care are lacking. The transcription factor zinc finger E-box-binding homeobox 2 (ZEB2) has been identified as a potential marker for tumour aggressiveness and cancer progression in cervical cancer tissues. Methods This study presents a hybrid deep learning system developed to classify cervical cancer images based on ZEB2 expression. The system integrates multiple convolutional neural network models-EfficientNet, DenseNet, and InceptionNet-using ensemble voting. We utilised the gradient-weighted class activation mapping (Grad-CAM) visualisation technique to improve the interpretability of the decisions made by the convolutional neural networks. The dataset consisted of 649 annotated images, which were divided into training, validation, and testing sets. Results The hybrid model exhibited a high classification accuracy of 94.4% on the test set. The Grad-CAM visualisations offered insights into the model's decision-making process, emphasising the image regions crucial for classifying ZEB2 expression levels. Conclusion The proposed hybrid deep learning model presents an effective and interpretable method for the classification of cervical cancer based on ZEB2 expression. This approach holds the potential to substantially aid in early diagnosis, thereby potentially enhancing patient outcomes and mitigating healthcare costs. Future endeavours will concentrate on enhancing the model's accuracy and investigating its applicability to other cancer types.
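Ensemble voting over several CNNs is commonly done by averaging class probabilities (soft voting). A minimal sketch, with toy probabilities standing in for EfficientNet, DenseNet, and InceptionNet outputs; the abstract does not specify whether the voting is soft or hard:

```python
import numpy as np

def soft_vote(prob_list):
    """Average per-class probabilities from several models, then argmax."""
    avg = np.mean(prob_list, axis=0)
    return avg.argmax(axis=-1)

# toy (n_samples=2, n_classes=2) probability outputs from three CNNs
p_eff = np.array([[0.7, 0.3], [0.2, 0.8]])
p_dense = np.array([[0.6, 0.4], [0.4, 0.6]])
p_incep = np.array([[0.8, 0.2], [0.3, 0.7]])
print(soft_vote([p_eff, p_dense, p_incep]))  # [0 1]
```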

  • Article type: Journal Article
    Liquid-liquid phase separation (LLPS) regulates many biological processes including RNA metabolism, chromatin rearrangement, and signal transduction. Aberrant LLPS potentially leads to serious diseases. Therefore, the identification of LLPS proteins is crucial. Traditionally, biochemistry-based methods for identifying LLPS proteins are costly, time-consuming, and laborious. In contrast, artificial intelligence-based approaches are fast and cost-effective and can be a better alternative to biochemistry-based methods. Previous research methods employed word2vec in conjunction with machine learning or deep learning algorithms. Although word2vec captures word semantics and relationships, it might not be effective in capturing features relevant to protein classification, such as physicochemical properties, evolutionary relationships, or structural features. Additionally, other studies often focused on a limited set of features for model training, including planar π contact frequency, pi-pi, and β-pairing propensities. To overcome such shortcomings, this study first constructed a reliable dataset containing 1206 protein sequences, including 603 LLPS and 603 non-LLPS protein sequences. Then a computational model was proposed to efficiently identify LLPS proteins by perceiving semantic information of protein sequences directly, using an ESM2-36 pre-trained model based on the transformer architecture in conjunction with a convolutional neural network. The model achieved accuracies of 85.68% and 89.67% on training and test data, respectively, surpassing the accuracy of previous studies. This performance demonstrates the potential of our computational methods as efficient alternatives for identifying LLPS proteins.
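A convolutional head over per-residue embeddings typically slides 1-D filters along the sequence and max-pools each filter's responses into a fixed-size vector. The sketch below uses random arrays as stand-ins for ESM2-36 embeddings and learned filters; it illustrates the operation, not the paper's architecture details:

```python
import numpy as np

def conv_head(embed, filters):
    """Per-residue embeddings (L, D) -> one max-pooled response per 1-D
    convolutional filter, giving a fixed-size vector for the classifier."""
    L, D = embed.shape
    out = []
    for f in filters:                       # each filter has shape (k, D)
        k = f.shape[0]
        resp = [(embed[i:i + k] * f).sum() for i in range(L - k + 1)]
        out.append(max(resp))               # global max pooling
    return np.array(out)

rng = np.random.default_rng(1)
emb = rng.standard_normal((10, 4))          # stand-in for ESM2 embeddings
filts = [rng.standard_normal((3, 4)) for _ in range(2)]
vec = conv_head(emb, filts)
print(vec.shape)  # (2,)
```

Max pooling is what makes the output size independent of sequence length, so proteins of any length map to the same classifier input.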

  • Article type: Journal Article
    During underwater image processing, image quality is affected by the absorption and scattering of light in water, causing problems such as blurring and noise. Poor image quality is therefore unavoidable, and underwater image denoising is vital for obtaining satisfactory research results. This paper presents an underwater image denoising method, named HHDNet, designed to address noise arising from environmental interference and technical limitations during underwater robot photography. The method leverages a dual-branch network architecture to handle high and low frequencies separately, incorporating a hybrid attention module specifically designed to remove high-frequency abrupt noise in underwater images. Input images are decomposed into high-frequency and low-frequency components using a Gaussian kernel. For the high-frequency part, a Global Context Extractor (GCE) module with a hybrid attention mechanism focuses on removing high-frequency abrupt signals by capturing local details and global dependencies simultaneously. For the low-frequency part, which carries less noise, efficient residual convolutional units are used. Experimental results demonstrate that HHDNet effectively performs underwater image denoising, surpassing other existing methods not only in denoising effectiveness but also in computational efficiency, and thus offering more flexibility in underwater image noise removal.
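The Gaussian frequency decomposition at the front of the network can be sketched as follows: low-pass the image with a separable Gaussian kernel and take the residual as the high-frequency branch input. Kernel size and sigma here are illustrative choices, not HHDNet's actual values:

```python
import numpy as np

def split_frequencies(img, sigma=1.0, radius=2):
    """Low-frequency component via separable Gaussian blur;
    high-frequency component is the residual (img - low)."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    # separable blur: rows then columns, edges padded by reflection
    pad = np.pad(img, radius, mode="reflect")
    rows = np.apply_along_axis(lambda r: np.convolve(r, g, mode="valid"), 1, pad)
    low = np.apply_along_axis(lambda c: np.convolve(c, g, mode="valid"), 0, rows)
    return low, img - low

img = np.random.default_rng(0).random((16, 16))
low, high = split_frequencies(img)
print(np.allclose(low + high, img))  # True
```

Because the two components sum back to the input exactly, each branch can specialize (attention on the noisy high band, lightweight residual units on the smooth low band) without losing information.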

  • Article type: Journal Article
    Purpose To assess the prognostic value of a deep learning-based chest radiographic age (hereafter, CXR-Age) model in a large external test cohort of Asian individuals. Materials and Methods This single-center, retrospective study included chest radiographs from consecutive, asymptomatic Asian individuals aged 50-80 years who underwent health checkups between January 2004 and June 2018. This study performed a dedicated external test of a previously developed CXR-Age model, which predicts an age adjusted based on the risk of all-cause mortality. Adjusted hazard ratios (HRs) of CXR-Age for all-cause, cardiovascular, lung cancer, and respiratory disease mortality were assessed using multivariable Cox or Fine-Gray models, and their added values were evaluated by likelihood ratio tests. Results A total of 36 924 individuals (mean chronological age, 58 years ± 7 [SD]; CXR-Age, 60 years ± 5; 22 352 male) were included. During a median follow-up of 11.0 years, 1250 individuals (3.4%) died, including 153 cardiovascular (0.4%), 166 lung cancer (0.4%), and 98 respiratory (0.3%) deaths. CXR-Age was a significant risk factor for all-cause (adjusted HR at chronological age of 50 years, 1.03; at 60 years, 1.05; at 70 years, 1.07), cardiovascular (adjusted HR, 1.11), lung cancer (adjusted HR for individuals who formerly smoked, 1.12; for those who currently smoke, 1.05), and respiratory disease (adjusted HR, 1.12) mortality (P < .05 for all). The likelihood ratio test demonstrated added prognostic value of CXR-Age to clinical factors, including chronological age for all outcomes (P < .001 for all). Conclusion Deep learning-based chest radiographic age was associated with various survival outcomes and had added value to clinical factors in asymptomatic Asian individuals, suggesting its generalizability. 
Keywords: Conventional Radiography, Thorax, Heart, Lung, Mediastinum, Outcomes Analysis, Quantification, Prognosis, Convolutional Neural Network (CNN) Supplemental material is available for this article. © RSNA, 2024 See also the commentary by Adams and Bressem in this issue.
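As a rough illustration of how the reported adjusted hazard ratios compound: under a proportional-hazards (Cox) model, the hazard scales multiplicatively per unit of the covariate. The 1.05-per-year figure below is taken from the abstract; the computation is a generic Cox-model property, not the study's code:

```python
def relative_hazard(hr_per_year, delta_years):
    """Under a Cox model, an HR per unit covariate compounds
    multiplicatively: relative hazard = HR ** delta."""
    return hr_per_year ** delta_years

# With the reported adjusted HR of 1.05 per CXR-Age year at age 60,
# a predicted age 5 years above peers implies roughly 1.28x the hazard.
print(round(relative_hazard(1.05, 5), 2))  # 1.28
```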

  • Article type: Journal Article
    Epilepsy is one of the most well-known neurological disorders globally, leading to individuals experiencing sudden seizures and significantly impacting their quality of life. Hence, there is an urgent need for an efficient method to detect and predict seizures in order to mitigate the risks faced by epilepsy patients. In this paper, a new method for seizure detection and prediction is proposed, based on multi-class feature fusion and a convolutional neural network-gated recurrent unit-attention mechanism (CNN-GRU-AM) model. Initially, the electroencephalography (EEG) signal undergoes wavelet decomposition through the Discrete Wavelet Transform (DWT), resulting in six subbands. Subsequently, time-frequency domain and nonlinear features are extracted from each subband. Finally, the CNN-GRU-AM further extracts features and performs classification. The CHB-MIT dataset is used to validate the proposed approach. The results of tenfold cross-validation show that our method achieved a sensitivity of 99.24% and 95.47%, specificity of 99.51% and 94.93%, accuracy of 99.35% and 95.16%, and an AUC of 99.34% and 95.15% in seizure detection and prediction tasks, respectively. These results show that the proposed method can achieve high-precision detection and prediction of seizures, allowing patients and doctors to take timely protective measures.
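The wavelet-decomposition front end can be sketched with the simplest case, the Haar DWT: each level splits the signal into approximation (low-pass) and detail (high-pass) coefficients, and repeating the split on the approximation yields the multi-level subbands from which features are extracted. The paper's wavelet family is not stated, so Haar is used here purely for illustration:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar DWT: pairwise scaled sums give the
    approximation coefficients, pairwise scaled differences give the
    detail coefficients. Apply recursively to the approximation to
    build a multi-level subband decomposition."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

sig = np.array([4.0, 2.0, 5.0, 5.0])
a, d = haar_dwt(sig)
print(a.shape, d.shape)  # (2,) (2,)
```

Five levels of such splitting produce five detail subbands plus one final approximation, matching the six subbands the method extracts features from.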