Convolutional neural network

  • Article Type: Journal Article
    OBJECTIVE: This study aimed to design an autodelineation model based on convolutional neural networks for generating high-risk clinical target volumes and organs at risk in image-guided adaptive brachytherapy for cervical cancer.
    METHODS: A novel SERes-u-net was trained and tested using CT scans from 98 patients with locally advanced cervical cancer who underwent image-guided adaptive brachytherapy. The Dice similarity coefficient, 95th percentile Hausdorff distance, and clinical assessment were used for evaluation.
    RESULTS: The mean Dice similarity coefficients of our model were 80.8%, 91.9%, 85.2%, 60.4%, and 82.8% for the high-risk clinical target volume, bladder, rectum, sigmoid, and bowel loops, respectively. The corresponding 95th percentile Hausdorff distances were 5.23 mm, 4.75 mm, 4.06 mm, 30.0 mm, and 20.5 mm. The evaluation revealed that 99.3% of the convolutional neural network-generated high-risk clinical target volume slices were acceptable to oncologist A and 100% to oncologist B. Most segmentations of the organs at risk were clinically acceptable, except for 25% of the sigmoid contours, which required significant revision in the opinion of oncologist A. There was a significant difference between the two oncologists in the clinical evaluation of the convolutional neural network-generated high-risk clinical target volumes (P<0.001), whereas their scores for the organs at risk did not differ significantly. In the consistency evaluation, a large discrepancy was observed between senior and junior clinicians: junior clinicians judged about 40% of the SERes-u-net-generated contours to be better.
    CONCLUSIONS: The high-risk clinical target volumes and organs at risk of cervical cancer generated by the proposed convolutional neural network model can be used clinically, potentially improving segmentation consistency and contouring efficiency in the image-guided adaptive brachytherapy workflow.
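    The two geometric metrics used in this evaluation can be sketched in a few lines of NumPy (an illustrative toy, not the study's evaluation code; the masks and point sets below are invented):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

def hd95(points_a, points_b):
    """95th percentile Hausdorff distance between two point sets of shape (N, 2)."""
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    # directed distances: each point to the nearest point of the other set
    d_ab = d.min(axis=1)
    d_ba = d.min(axis=0)
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))

# Toy example: two overlapping 5x5 square masks on a 10x10 grid
a = np.zeros((10, 10), dtype=bool); a[2:7, 2:7] = True
b = np.zeros((10, 10), dtype=bool); b[3:8, 3:8] = True
print(round(dice_coefficient(a, b), 3))  # 0.64 (16 shared pixels of 25 + 25)
```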

  • Article Type: Journal Article
    OBJECTIVE: Myocardial contrast echocardiography (MCE) plays a crucial role in diagnosing ischemia, infarction, masses and other cardiac conditions. In the realm of MCE image analysis, accurate and consistent myocardial segmentation results are essential for enabling automated analysis of various heart diseases. However, current manual diagnostic methods in MCE suffer from poor repeatability and limited clinical applicability. MCE images often exhibit low quality and high noise due to the instability of ultrasound signals, while interference structures can further disrupt segmentation consistency.
    METHODS: To overcome these challenges, we proposed a deep-learning network for MCE segmentation. The architecture leverages dilated convolutions to capture large-scale information without sacrificing positional accuracy and modifies multi-head self-attention to enhance global context and ensure consistency, effectively overcoming issues related to low image quality and interference. Furthermore, we also adapted the cascaded application of transformers with convolutional neural networks to improve segmentation in MCE.
    RESULTS: In our experiments, our architecture achieved the best Dice score, 84.35%, on standard MCE views compared with several state-of-the-art segmentation models. On non-standard views and on frames with interfering structures (masses), our model also attained the best Dice scores, 83.33% and 83.97%, respectively.
    CONCLUSIONS: These studies show that our architecture offers excellent shape consistency and robustness, allowing it to handle segmentation of various types of MCE images. Our relatively precise and consistent myocardial segmentation results provide the foundation for automated analysis of various heart diseases, with the potential to uncover underlying pathological features and reduce healthcare costs.
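    As a minimal sketch of the dilated-convolution idea mentioned in the methods (illustrative only; the paper's network is not reproduced here), a kernel of size k with dilation d covers a span of (k-1)·d+1 samples with the same number of weights:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """1D 'valid' cross-correlation with a dilated kernel: taps are spaced
    `dilation` samples apart, widening the receptive field at no parameter cost."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # receptive field of one output sample
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])

x = np.arange(8, dtype=float)           # toy signal 0..7
k = np.ones(3)                          # 3-tap kernel
y1 = dilated_conv1d(x, k, dilation=1)   # span 3 -> 6 outputs
y2 = dilated_conv1d(x, k, dilation=2)   # span 5 -> 4 outputs, wider context
```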

  • Article Type: Journal Article
    Monitoring chlorophyll-a concentration (Chl-a, μg·L⁻¹) in aquatic ecosystems has attracted much attention because of its direct link to harmful algal blooms. However, a cost-effective method for measuring Chl-a in small waterbodies has been lacking. Inspired by the rise of smartphone photography, a smartphone-based convolutional neural network (CNN) framework (SCCA) was developed to estimate Chl-a in aquatic ecosystems. To evaluate the performance of SCCA, 238 paired records (a smartphone image with a 12-color background and a measured Chl-a value) were collected from diverse aquatic ecosystems (e.g., rivers, lakes, and ponds) across China in 2023. Our performance evaluation revealed NS and R² values of 0.90 and 0.94 for Chl-a estimation, with a satisfactory model fit (NS = 0.84, R² = 0.86) under low-Chl-a (<30 μg·L⁻¹) conditions. SCCA incorporates a real-time update method with hyperparameter optimization. Compared with existing methods of measuring Chl-a, SCCA provides a useful screening tool for cost-effective Chl-a measurement and has the potential to serve as an algal-bloom screening tool in small waterbodies (demonstrated with the Huajin River as a case study), especially where resources for water measurement are limited. Overall, we highlight that SCCA could be integrated into a smartphone application in the future and applied to diverse waterbodies in environmental management.
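    The two goodness-of-fit scores reported above can be computed as follows (a sketch with invented observation/estimate pairs, not the study's data):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, sim):
    """Squared Pearson correlation between observed and simulated values."""
    r = np.corrcoef(obs, sim)[0, 1]
    return r * r

obs = np.array([5.0, 12.0, 30.0, 55.0, 80.0])  # hypothetical measured Chl-a (μg/L)
sim = np.array([6.0, 10.0, 33.0, 50.0, 78.0])  # hypothetical model estimates
```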

  • Article Type: Journal Article
    Classifying images is one of the most important tasks in computer vision. Recently, the best performance on image classification tasks has been shown by networks that are both deep and well connected. Most datasets today consist of a fixed number of color images, which are taken in red-green-blue (RGB) format and classified without any changes to the original. Observing that color spaces (essentially transformations of the original RGB images) have a major impact on classification accuracy, we delve into their significance. Moreover, on datasets with a highly variable number of classes, such as PlantVillage, a model that incorporates numerous color spaces within the same architecture achieves high accuracy, and different classes of images are better represented in different color spaces. Furthermore, we demonstrate that this type of model, in which the input is preprocessed into many color spaces simultaneously, requires significantly fewer parameters to achieve high classification accuracy. The proposed model takes an RGB image as input, converts it into seven separate color spaces at once, and then feeds each color space into its own convolutional neural network (CNN) model. To reduce the computational load and the number of hyperparameters, we employ group convolutional layers in the proposed CNN model. We achieve substantial gains over the present state-of-the-art methods for crop disease classification.
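    The parameter savings from group convolutions can be illustrated with a simple weight count (a sketch; the channel counts below are invented for illustration and are not taken from the paper):

```python
def conv2d_weights(c_in, c_out, k, groups=1):
    """Weight count of a k x k 2D convolution (bias ignored). With `groups` > 1,
    each group connects c_in/groups input channels to c_out/groups outputs."""
    assert c_in % groups == 0 and c_out % groups == 0
    return (c_in // groups) * (c_out // groups) * k * k * groups

# Suppose 7 color spaces of 3 channels each are stacked: 21 input channels.
dense = conv2d_weights(21, 63, 3)              # one full convolution
grouped = conv2d_weights(21, 63, 3, groups=7)  # one group per color space
print(dense, grouped)  # 11907 1701 -> 7x fewer weights
```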

  • Article Type: Journal Article
    Gait recognition, a biometric identification method, has garnered significant attention due to its unique attributes, including non-invasiveness, long-distance capture, and resistance to impersonation. Gait recognition has undergone a revolution driven by the remarkable capacity of deep learning to extract complicated features from data. This work provides an overview of current developments in deep learning-based gait recognition methods. We explore and analyze the development of gait recognition and highlight its uses in forensics, security, and criminal investigations. The article delves into the challenges associated with gait recognition, such as variations in walking conditions, viewing angles, and clothing. We discuss the effectiveness of deep neural networks in addressing these challenges through a comprehensive analysis of state-of-the-art architectures, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and attention mechanisms. Diverse neural network-based gait recognition models, such as Gate Controlled and Shared Attention ICDNet (GA-ICDNet), Multi-Scale Temporal Feature Extractor (MSTFE), GaitNet, and various CNN-based approaches, demonstrate impressive accuracy across different walking conditions, showcasing the effectiveness of these models in capturing unique gait patterns. GaitNet achieved an exceptional identification accuracy of 99.7%, whereas GA-ICDNet showed high precision with an equal error rate of 0.67% in verification tasks. GaitGraph (ResGCN+2D CNN) achieved rank-1 accuracies ranging from 66.3% to 87.7%, whereas a Fully Connected Network with Koopman Operator achieved an average rank-1 accuracy of 74.7% on OU-MVLP across various conditions. However, GCPFP (GCN with Graph Convolution-Based Part Feature Polling), which utilizes a graph convolutional network (GCN), and GaitSet achieve the lowest average rank-1 accuracy, 62.4%, on CASIA-B, while MFINet (Multiple Factor Inference Network) exhibits the lowest accuracy range, 11.72% to 19.32%, under clothing-variation conditions on CASIA-B. In addition to this across-the-board analysis of recent breakthroughs in gait recognition, potential directions for future research are also assessed.

  • Article Type: Journal Article
    BACKGROUND: Hand gestures are an effective communication tool that can convey a wealth of information in a variety of sectors, including medicine and education. E-learning has grown significantly in the last several years and is now an essential resource for many businesses. Still, little research has been conducted on the use of hand gestures in e-learning. Similarly, medical professionals frequently use gestures to assist with diagnosis and treatment.
    METHODS: We aim to improve the way instructors, students, and medical professionals receive information by introducing a dynamic method for hand gesture monitoring and recognition. Our approach comprises six modules: video-to-frame conversion, preprocessing for quality enhancement, hand skeleton mapping with single shot multibox detector (SSMD) tracking, hand detection using background modeling and a convolutional neural network (CNN) bounding-box technique, feature extraction using point-based and full-hand-coverage techniques, and optimization using a population-based incremental learning algorithm. A 1D CNN classifier is then used to identify hand motions.
    RESULTS: After extensive trial and error, we obtained hand tracking accuracies of 83.71% and 85.71% on the Indian Sign Language and WLASL datasets, respectively. Our findings show how well our method recognizes hand motions.
    CONCLUSIONS: Teachers, students, and medical professionals can all efficiently transmit and comprehend information using our proposed system. The accuracy rates obtained highlight how our method might improve communication and ease information exchange in various domains.
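    Of the six modules, the population-based incremental learning (PBIL) optimizer is compact enough to sketch. The following is a generic PBIL on a toy OneMax objective, under our own assumptions, not the paper's implementation:

```python
import numpy as np

def pbil(fitness, n_bits, pop=20, iters=50, lr=0.1, seed=0):
    """Population-based incremental learning: sample bit strings from a
    probability vector and nudge the vector toward each generation's elite."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                 # independent Bernoulli probabilities
    best, best_f = None, -np.inf
    for _ in range(iters):
        samples = (rng.random((pop, n_bits)) < p).astype(int)
        scores = np.array([fitness(s) for s in samples])
        elite = samples[scores.argmax()]
        if scores.max() > best_f:
            best, best_f = elite.copy(), float(scores.max())
        p = (1 - lr) * p + lr * elite        # shift the distribution toward the elite
    return best, best_f

# Toy objective: maximize the number of ones (OneMax)
best, score = pbil(lambda s: s.sum(), n_bits=16)
```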

  • Article Type: Journal Article
    This study investigated the usefulness of deep learning-based automatic detection of temporomandibular joint (TMJ) effusion using magnetic resonance imaging (MRI) in patients with temporomandibular disorder, and whether the diagnostic accuracy of the model improved when patients' clinical information was provided in addition to MRI images. Sagittal MR images of 2948 TMJs were collected from 1017 women and 457 men (mean age 37.19 ± 18.64 years). The TMJ effusion diagnostic performances of three convolutional neural networks (scratch, fine-tuning, and freeze schemes) were compared with those of human experts based on areas under the curve (AUCs) and diagnostic accuracies. The fine-tuning model with proton density (PD) images showed acceptable prediction performance (AUC = 0.7895), whereas the from-scratch (0.6193) and freeze (0.6149) models showed lower performance (p < 0.05). The fine-tuning model had excellent specificity compared with the human experts (87.25% vs. 58.17%). However, the human experts were superior in sensitivity (80.00% vs. 57.43%) (all p < 0.001). In gradient-weighted class activation mapping (Grad-CAM) visualizations, the fine-tuning scheme focused more on effusion than on other structures of the TMJ, and its sparsity was higher than that of the from-scratch scheme (82.40% vs. 49.83%, p < 0.05). The Grad-CAM visualizations agreed with the model having learned important features in the TMJ area, particularly around the articular disc. Two fine-tuning models on PD and T2-weighted images showed that diagnostic performance did not improve compared with using PD alone (p < 0.05). Diverse AUCs were observed across groups when the patients were divided according to age (0.7083-0.8375) and sex (male: 0.7576, female: 0.7083). The prediction accuracy of the ensemble model was higher than that of the human experts when all the data were used (74.21% vs. 67.71%, p < 0.05). A deep neural network (DNN) was developed to process multimodal data, including MRI and patient clinical data. Analysis of four age groups with the DNN model showed that the 41-60 age group had the best performance (AUC = 0.8258). The fine-tuning model and DNN were optimal for judging TMJ effusion and may be used to prevent true negative cases and aid human diagnostic performance. Assistive automated diagnostic methods have the potential to increase clinicians' diagnostic accuracy.
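    The sensitivity/specificity trade-off reported above follows directly from the confusion-matrix definitions (a sketch with hypothetical counts, not the study's data):

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # recall on effusion-positive cases
    specificity = tn / (tn + fp)   # recall on effusion-negative cases
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical reader results on 200 joints (invented numbers)
sens, spec, acc = binary_metrics(tp=80, fp=20, tn=85, fn=15)
```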

  • Article Type: Journal Article
    OBJECTIVE: Respiratory motion, cardiac motion, and an inherently low signal-to-noise ratio (SNR) are major limitations of in vivo cardiac diffusion tensor imaging (DTI). We propose a novel enhancement method that uses unsupervised-learning-based invertible wavelet scattering (IWS) to improve the quality of in vivo cardiac DTI.
    APPROACH: Our method starts by extracting nearly transformation-invariant features from multiple cardiac diffusion-weighted (DW) image acquisitions using multi-scale wavelet scattering (WS). The relationship between the WS coefficients and DW images is learned through a multi-scale encoder and a decoder network. Using the trained encoder, deep features of the WS coefficients of multiple DW image acquisitions are further extracted and then fused using an averaging rule. Finally, using the fused WS features and the trained decoder, the enhanced DW images are derived.
    MAIN RESULTS: We evaluated the performance of the proposed method by comparing it with several methods on three in vivo cardiac DTI datasets in terms of SNR, contrast-to-noise ratio (CNR), fractional anisotropy (FA), mean diffusivity (MD), and helix angle (HA). Compared with the best comparison method, the SNR/CNR of diastolic, gastric peristalsis-influenced, and end-systolic DW images improved by 1%/16%, 5%/6%, and 56%/30%, respectively. The approach also yielded more consistent FA and MD values and more coherent helical fiber structures than the comparison methods used in this work.
    SIGNIFICANCE: The ablation results verify that using transformation-invariant and noise-robust wavelet scattering features enables effective exploration of useful information from limited data. This provides a potential means to alleviate the dependence of the fusion results on the number of repeated acquisitions, which helps address noise and residual-motion issues simultaneously, thereby improving the quality of in vivo cardiac DTI.
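    The SNR and CNR figures of merit used above can be sketched with the usual ROI-based definitions (computed on synthetic intensities; mean/std-based definitions are an assumption here, and exact definitions vary by study):

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """SNR: mean signal intensity over the std of a background (noise) region."""
    return signal_roi.mean() / noise_roi.std()

def cnr(roi_a, roi_b, noise_roi):
    """CNR: absolute contrast between two ROIs over the background noise std."""
    return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std()

rng = np.random.default_rng(1)
myocardium = 100 + rng.normal(0, 5, 1000)  # synthetic DW intensities
cavity = 60 + rng.normal(0, 5, 1000)
background = rng.normal(0, 5, 1000)
```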

  • Article Type: Journal Article
    BACKGROUND: Epilepsy, which is associated with neuronal damage and functional decline, typically presents patients with numerous challenges in daily life. An early diagnosis plays a crucial role in managing the condition and alleviating patients' suffering. Electroencephalogram (EEG)-based approaches are commonly employed for diagnosing epilepsy because of their effectiveness and non-invasiveness. In this study, a classification method is proposed that uses fast Fourier transform (FFT) feature extraction in conjunction with convolutional neural network (CNN) and long short-term memory (LSTM) models.
    METHODS: Whereas most methods classify epilepsy using traditional frameworks, we propose a new approach to this problem: extracting features from the source data and then feeding them into a network for training and recognition. The method preprocesses the source data into training and validation sets and then uses the CNN and LSTM to classify the data.
    RESULTS: Upon analyzing a public test dataset, the top-performing features among the three feature types in the fully CNN-nested LSTM model for epilepsy classification were the FFT features. Notably, all conducted experiments yielded high accuracy, with values exceeding 96% for accuracy, 93% for sensitivity, and 96% for specificity. These results were further benchmarked against current methodologies, showing consistent and robust performance across all trials. Our approach consistently achieved an accuracy surpassing 97.00%, with values ranging from 97.95% to 99.83% in individual experiments. Particularly noteworthy is the superior accuracy of our method in the AB versus (vs.) CDE comparison, registering 99.06%.
    CONCLUSIONS: Our method exhibits precise classification ability in distinguishing between epileptic and non-epileptic individuals, irrespective of whether the participant's eyes are closed or open. Furthermore, our technique performs remarkably well in categorizing epilepsy type, distinguishing epileptic ictal and interictal states from non-epileptic conditions. An inherent advantage of our automated classification approach is its ability to disregard whether the EEG data were acquired during eye-closed or eye-open states. Such innovation holds promise for real-world applications, potentially aiding medical professionals in diagnosing epilepsy more efficiently.
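    The FFT feature extraction step can be sketched as band-power features over the standard EEG bands (an illustrative sketch, not the paper's exact feature set; the band edges are conventional assumptions):

```python
import numpy as np

def fft_band_features(x, fs, bands=((0.5, 4), (4, 8), (8, 13), (13, 30))):
    """Spectral power in the delta/theta/alpha/beta bands from a real FFT."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    return np.array([power[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])

fs = 256
t = np.arange(fs * 2) / fs          # 2 s of synthetic "EEG"
x = np.sin(2 * np.pi * 10 * t)      # pure 10 Hz alpha-band oscillation
feats = fft_band_features(x, fs)    # the alpha band (index 2) dominates
```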

  • Article Type: Journal Article
    Rapid identification of drug mechanisms is vital to the development and effective use of chemotherapeutics. Herein, we develop a multichannel surface-enhanced Raman scattering (SERS) sensor array and apply deep learning approaches to realize rapid identification of the mechanisms of various chemotherapeutic drugs. By implementing a series of self-assembled monolayers (SAMs) with varied molecular characteristics to promote heterogeneous physicochemical interactions at the interfaces, the sensor can generate diversified SERS signatures that directly fingerprint high-dimensional drug-induced molecular changes in cells. We further train a convolutional neural network model on the multidimensional SAM-modulated SERS dataset and achieve a discrimination accuracy approaching 99%. We expect that such a platform will help expand the toolbox for drug screening and characterization and facilitate the drug development process.
