Convolutional neural networks

  • Article type: Journal Article
    Missed diagnosis of evolving or coexisting idiopathic intracranial hypertension (IIH) and spontaneous intracranial hypotension (SIH) is often the reason for persistent or worsening symptoms after foramen magnum decompression for Chiari malformation (CM) I. We explore, for the first time in the literature, the combined role of artificial intelligence (AI)/convolutional neural networks (CNNs) in Chiari I malformation, examining both upstream and downstream magnetic resonance findings as initial screening profilers in CM-1. We also present a review of all existing subtypes of CM and discuss the role of upright (gravity-aided) magnetic resonance imaging (MRI) in evaluating equivocal tonsillar descent on a lying-down MRI. We have formulated a workflow algorithm, MaChiP 1.0 (Manjila Chiari Protocol 1.0), using upstream and downstream profilers that cause de novo or worsening Chiari I malformation, which we plan to implement using AI.
    The PRISMA guidelines were used to search PubMed for "CM and machine learning and CNN", and four articles specific to the topic were identified. The radiologic criteria for IIH and SIH were drawn from the neurosurgical literature and applied to both primary and secondary (acquired) Chiari I malformations. An upstream etiology such as IIH or SIH and an isolated downstream etiology in the spine were characterized using the existing body of literature. We propose four selected criteria each for IIH and SIH, assessed on T2-weighted MRI of the brain and spine, with predominantly sagittal sequences for upstream etiologies in the brain and multiplanar MRI for spinal lesions.
    Using MaChiP 1.0 (patent/copyright pending) concepts, we propose the upstream and downstream profilers implicated in progressive Chiari I malformation. The upstream profilers suggestive of SIH included brain sagging, the slope of the third ventricular floor, the pontomesencephalic angle, the mamillopontine distance, the lateral ventricular angle, the internal cerebral vein-vein of Galen angle, displacement of the iter, clivus length, and tonsillar descent. The IIH features noted in upstream pathologies were posterior flattening of the globe of the eye, partial empty sella, optic nerve sheath distortion, and optic nerve tortuosity on MRI. The downstream etiologies involved spinal cerebrospinal fluid (CSF) leaks from dural tears, meningeal diverticula, CSF-venous fistulae, etc.
    AI would help offer predictive analysis along the spectrum of upstream and downstream etiologies, ensuring safety and efficacy in treating secondary (acquired) Chiari I malformation, especially with coexisting IIH and SIH. The MaChiP 1.0 algorithm can help document worsening of a previously diagnosed CM-1 and identify the exact etiology of a secondary CM-1. However, the role of posterior fossa morphometry and cine-flow MRI data for intracranial CSF flow dynamics, along with advanced spinal CSF studies using dynamic myelo-CT scanning, in the formation of secondary CM-1 is still being evaluated.
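    The abstract frames MaChiP 1.0 as a rule-based screen over upstream MRI findings, with four selected criteria each for IIH and SIH. A minimal illustrative sketch of such a screen is shown below; the criterion names, the two-finding threshold, and the scoring are hypothetical assumptions, not the published protocol.
```python
# Minimal rule-based sketch in the spirit of MaChiP 1.0. The criterion names,
# the two-finding threshold, and the scoring are illustrative assumptions,
# not the published protocol.

IIH_CRITERIA = [
    "posterior_globe_flattening",
    "partial_empty_sella",
    "optic_nerve_sheath_distortion",
    "optic_nerve_tortuosity",
]

SIH_CRITERIA = [
    "brain_sagging",
    "reduced_pontomesencephalic_angle",
    "short_mamillopontine_distance",
    "iter_displacement",
]


def upstream_profile(findings, threshold=2):
    """Return a coarse upstream label from a dict of boolean MRI findings."""
    iih_score = sum(bool(findings.get(k)) for k in IIH_CRITERIA)
    sih_score = sum(bool(findings.get(k)) for k in SIH_CRITERIA)
    if iih_score >= threshold and iih_score >= sih_score:
        return "possible IIH-related (secondary) CM-1"
    if sih_score >= threshold:
        return "possible SIH-related (secondary) CM-1"
    return "no upstream profile flagged"


print(upstream_profile({"brain_sagging": True, "iter_displacement": True}))
```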

  • Article type: Journal Article
    Accurate breed identification in dairy cattle is essential for optimizing herd management and improving genetic standards. A smart method for correctly identifying phenotypically similar breeds can empower farmers to enhance herd productivity. A convolutional neural network (CNN)-based model was developed for the identification of Sahiwal and Red Sindhi cows. To increase the classification accuracy, the cows' pixels were first segmented from the background using a CNN model. From this segmentation, a masked image was produced by retaining the cows' pixels from the original image while eliminating the background. To improve the classification accuracy, models were trained on four different images of each cow: front view, side view, grayscale front view, and grayscale side view. The masked images of these views were fed to a multi-input CNN model that predicts the class of the input images. The segmentation model achieved intersection-over-union (IoU) and F1-score values of 81.75% and 85.26%, respectively, with an inference time of 296 ms. For the classification task, multiple variants of the MobileNet and EfficientNet models were used as backbones with pre-trained weights. The MobileNet model achieved 80.0% accuracy for both breeds, while MobileNetV2 and MobileNetV3 reached 82.0% accuracy. CNN models with EfficientNet backbones outperformed the MobileNet models, with accuracy ranging from 84.0% to 86.0%. The F1-scores for these models were above 83.0%, indicating effective breed classification with fewer false positives and negatives. Thus, the present study demonstrates that deep learning models can be used effectively to identify phenotypically similar-looking cattle breeds. By enabling accurate identification of zebu breeds, this study will reduce farmers' dependence on experts.
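    A minimal sketch of the kind of multi-input setup described above, assuming a shared MobileNetV2 backbone, 224x224 masked inputs, grayscale views replicated to three channels, and a small fused classification head; the abstract does not give the authors' exact layer configuration.
```python
# Minimal sketch of a four-view, multi-input breed classifier with a shared
# MobileNetV2 backbone. The shared backbone, input size, and head sizes are
# assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn
from torchvision import models


class MultiViewBreedNet(nn.Module):
    def __init__(self, n_classes=2):                 # Sahiwal vs Red Sindhi
        super().__init__()
        base = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
        self.backbone = base.features                # shared across the 4 views
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Sequential(
            nn.Linear(1280 * 4, 128), nn.ReLU(), nn.Linear(128, n_classes))

    def forward(self, views):
        # views: list of 4 masked images (front, side, gray front, gray side),
        # each a (B, 3, 224, 224) tensor; grayscale views replicated to 3 channels.
        feats = [self.pool(self.backbone(v)).flatten(1) for v in views]
        return self.head(torch.cat(feats, dim=1))


model = MultiViewBreedNet()
dummy = [torch.randn(1, 3, 224, 224) for _ in range(4)]
print(model(dummy).shape)  # torch.Size([1, 2])
```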

  • Article type: Journal Article
    Persuasive technologies, in connection with human factors engineering requirements for healthy workplaces, have played a significant role in ensuring a change in human behavior. Healthy-workplace guidance suggests different best practices applicable to body posture, proximity to the computer system, movement, lighting conditions, computer system layout, and other significant psychological and cognitive aspects. Most importantly, body-posture guidance suggests how users should sit or stand in workplaces in line with best and healthy practices. In this study, we developed two study phases (pilot and main) using two deep learning models: a convolutional neural network (CNN) and YOLO-V3. To train the two models, we collected posture datasets from Creative Commons-licensed YouTube videos and Kaggle. We classified the dataset into comfortable and uncomfortable postures. Results show that our YOLO-V3 model outperformed the CNN model, with a mean average precision of 92%. Based on this finding, we recommend that the YOLO-V3 model be integrated into the design of persuasive technologies for a healthy workplace. Additionally, we provide future implications for integrating proximity detection, taking into consideration the ideal distance in centimeters users should maintain in a healthy workplace.
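    As a rough illustration of the CNN side of the comparison, the sketch below shows a small binary posture classifier over single video frames; the layer sizes, the 224x224 input, and the two-class head are assumptions, since the abstract does not specify the architecture of either model.
```python
# Minimal sketch of a small binary posture classifier of the kind a CNN
# baseline could use. Layer widths and input size are assumptions.
import torch
import torch.nn as nn


class PostureCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(64, 2)  # comfortable vs uncomfortable

    def forward(self, x):                   # x: (B, 3, 224, 224) video frame
        return self.classifier(self.features(x).flatten(1))


print(PostureCNN()(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 2])
```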

  • Article type: Journal Article
    Artificial intelligence (AI) is an epoch-making technology whose two most advanced branches are machine learning and deep learning, the latter a further development of machine learning, and it has been partially applied to assist EUS diagnosis. AI-assisted EUS diagnosis has been reported to have great value in the diagnosis of pancreatic tumors and chronic pancreatitis, gastrointestinal stromal tumors, early esophageal cancer, and biliary tract and liver lesions. The application of AI in EUS diagnosis still has some urgent problems to be solved. First, the development of sensitive AI diagnostic tools requires a large amount of high-quality training data. Second, current AI algorithms suffer from overfitting and bias, leading to poor diagnostic reliability. Third, the value of AI still needs to be determined in prospective studies. Fourth, the ethical risks of AI need to be considered and avoided.

  • Article type: Journal Article
    Malignant gliomas, which tend to grow rapidly and infiltrate surrounding tissues, are a major public health problem of global concern. Accurate grading of the tumor can determine the degree of malignancy and thus guide the best treatment plan, which can eliminate the tumor or limit its widespread metastasis, saving the patient's life and improving the prognosis. To predict the grading of gliomas more accurately, we propose a novel method that combines the advantages of 2D and 3D convolutional neural networks for tumor grading from multimodal magnetic resonance imaging. The core of the innovation lies in combining tumor 3D information extracted from multimodal data with features obtained from a 2D ResNet50 architecture. This compensates for the lack of the spatio-temporal information provided by 3D imaging in 2D convolutional neural networks, while avoiding the additional noise introduced by the excess information in 3D convolutional neural networks, which causes serious overfitting. Incorporating explicit tumor 3D information, such as tumor volume and surface area, enhances the grading model's performance and addresses the limitations of both approaches. By fusing information from multiple modalities, the model achieves a more precise and accurate characterization of tumors. The model is trained and evaluated using two publicly available brain glioma datasets, achieving an AUC of 0.9684 on the validation set. The model's interpretability is enhanced through heatmaps, which highlight the tumor region. The proposed method holds promise for clinical application in tumor grading and contributes to predictive medical diagnostics.
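    A minimal sketch of the fusion idea described above: slice-level features from a pretrained ResNet50 are concatenated with explicit 3D tumor descriptors (such as volume and surface area) before a small classification head. The feature dimensions, the two-grade output, and the fusion MLP are assumptions; this is not the authors' exact architecture.
```python
# Minimal sketch of 2D/3D fusion: ResNet50 features from an MRI slice
# concatenated with explicit 3D tumor descriptors. Sizes are illustrative.
import torch
import torch.nn as nn
from torchvision import models


class FusionGradingNet(nn.Module):
    def __init__(self, n_3d_features=2, n_grades=2):
        super().__init__()
        resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.encoder = nn.Sequential(*list(resnet.children())[:-1])  # drop fc
        self.head = nn.Sequential(
            nn.Linear(2048 + n_3d_features, 256), nn.ReLU(),
            nn.Linear(256, n_grades))

    def forward(self, slice_img, tumor_3d_feats):
        # slice_img: (B, 3, 224, 224) multimodal MRI slice mapped to 3 channels
        # tumor_3d_feats: (B, n_3d_features), e.g. [volume, surface_area]
        x = self.encoder(slice_img).flatten(1)               # (B, 2048)
        return self.head(torch.cat([x, tumor_3d_feats], dim=1))


net = FusionGradingNet()
print(net(torch.randn(1, 3, 224, 224), torch.randn(1, 2)).shape)  # (1, 2)
```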

  • Article type: Journal Article
    Schizophrenia (SZ) is a severe, chronic mental disorder without a specific treatment. Because of the increasing prevalence of SZ in society and the similarity of its characteristics to other mental illnesses such as bipolar disorder, most people are not aware of having it in their daily lives. Early detection of this disease would therefore allow sufferers to seek treatment or at least control it. Previous SZ detection studies based on machine learning methods require feature extraction and selection before the classification step. This study develops a novel, end-to-end approach based on a 15-layer convolutional neural network (CNN) and a 16-layer CNN-long short-term memory (LSTM) network to help psychiatrists automatically diagnose SZ from electroencephalogram (EEG) signals. The deep model uses CNN layers to learn the temporal properties of the signals, while LSTM layers provide the sequence-learning mechanism. In addition, a data augmentation method based on generative adversarial networks is applied to the training set to increase the diversity of the data. Results on a large EEG dataset show the high diagnostic potential of both proposed methods, which achieve remarkable accuracies of 98% and 99%. This study shows that the proposed framework is able to accurately discriminate SZ from healthy subjects and is potentially useful for developing diagnostic tools for SZ.
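    A minimal sketch of a CNN-LSTM EEG classifier in the spirit of the described 16-layer model: 1D convolutions learn local temporal features and an LSTM models the resulting sequence. The channel counts, kernel sizes, and the assumed 19-channel, 1,000-sample epoch are illustrative, not the authors' exact configuration.
```python
# Minimal CNN-LSTM sketch for binary EEG classification (SZ vs healthy).
# Layer sizes and the input shape are assumptions.
import torch
import torch.nn as nn


class EEGCnnLstm(nn.Module):
    def __init__(self, n_channels=19, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2))
        self.lstm = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                           # x: (B, 19, 1000) EEG epoch
        feats = self.cnn(x).permute(0, 2, 1)        # (B, T', 64) sequence
        _, (h, _) = self.lstm(feats)
        return self.fc(h[-1])                       # last hidden state -> logits


print(EEGCnnLstm()(torch.randn(2, 19, 1000)).shape)  # torch.Size([2, 2])
```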

  • Article type: Journal Article
    Non-destructive testing (NDT) is a technique for inspecting materials and their defects without causing damage to the tested components. Phased array ultrasonic testing (PAUT) has emerged as a hot topic in industrial NDT applications. Currently, the collection of ultrasound data is mostly automated, while the analysis of the data is still predominantly carried out manually. Manual analysis of scan-image defects is inefficient and prone to instability, prompting the need for computer-based solutions. Deep learning-based object detection methods have recently shown promise in addressing such challenges. However, this approach typically demands a substantial amount of high-resolution, well-annotated training data, which is difficult to obtain in NDT, making it hard to detect defects in low-resolution images and defects of varying sizes and positions. This work proposes improvements to the state-of-the-art YOLOv8 algorithm to enhance the accuracy and efficiency of defect detection in phased-array ultrasonic testing. Space-to-depth convolution (SPD-Conv) is introduced to replace strided convolution, mitigating information loss during convolution operations and improving detection performance on low-resolution images. Additionally, this paper constructs a bi-level routing and spatial attention module (BRSA) and incorporates it into the backbone, generating multiscale feature maps with richer details. In the neck section, the original structure is replaced by the asymptotic feature pyramid network (AFPN) to reduce model parameters and computational complexity. Tested on public datasets and compared with YOLOv8 (the baseline), the algorithm achieves high-quality detection of flat-bottom holes (FBH) and aluminium blocks on the simulated dataset. More importantly, for the challenging-to-detect side-drilled holes (SDH), it achieves an F1 score (the harmonic mean of precision and recall) of 82.50% and an intersection over union (IOU) of 65.96%, improvements of 17.56% and 0.43%, respectively. On the experimental dataset, the F1 score and IOU for FBH reach 75.68% (an increase of 9.01%) and 83.79%, respectively. The proposed algorithm also demonstrates robust performance in the presence of external noise while maintaining exceptionally high computational efficiency and inference speed. These experimental results validate the high detection performance of the proposed intelligent defect-detection algorithm for ultrasonic images, which contributes to the advancement of smart industry.
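    A minimal sketch of the space-to-depth convolution idea used to replace strided convolution: each 2x2 spatial block is rearranged into the channel dimension (losslessly), and a stride-1 convolution follows, so resolution is halved without discarding pixels. Channel sizes and the Conv-BN-SiLU layout are assumptions, not the authors' exact YOLOv8 module.
```python
# Minimal SPD-Conv sketch: space-to-depth rearrangement followed by a
# stride-1 convolution. Channel sizes are illustrative.
import torch
import torch.nn as nn


class SPDConv(nn.Module):
    def __init__(self, in_ch, out_ch, scale=2):
        super().__init__()
        self.spd = nn.PixelUnshuffle(scale)                   # space-to-depth
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch * scale * scale, out_ch, 3, stride=1, padding=1),
            nn.BatchNorm2d(out_ch), nn.SiLU())

    def forward(self, x):
        return self.conv(self.spd(x))


# Halves spatial resolution like a stride-2 conv, but without dropping pixels.
block = SPDConv(64, 128)
print(block(torch.randn(1, 64, 80, 80)).shape)  # torch.Size([1, 128, 40, 40])
```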

  • Article type: Journal Article
    Prostate cancer is one of the most common and fatal diseases among men, and its early diagnosis can have a significant impact on the treatment process and prevent mortality. Because it has no apparent clinical symptoms in the early stages, it is difficult to diagnose. In addition, disagreement among experts in the analysis of magnetic resonance images is a significant challenge. In recent years, various studies have shown that deep learning, especially convolutional neural networks, has been applied successfully in machine vision, particularly in medical image analysis. In this research, a deep learning approach was applied to multi-parametric magnetic resonance images, and the synergistic effect of clinical and pathological data on the accuracy of the model was investigated. The data were collected from Trita Hospital in Tehran and included 343 patients (data augmentation and transfer learning were used during the process). In the designed model, four different types of images are analyzed by four separate ResNet50 deep convolutional networks, and the extracted features are passed to a fully connected neural network and combined with clinical and pathological features. Without clinical and pathological data, the model reached a maximum accuracy of 88%; adding these data increased the accuracy to 96%, demonstrating the significant impact of clinical and pathological data on diagnostic accuracy.
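    A minimal sketch of the described design: four separate ResNet50 encoders, one per image type, with their pooled features concatenated with clinical/pathological variables before a fully connected classifier. The number of clinical variables, the head sizes, and the 224x224 inputs are assumptions; the abstract does not specify them.
```python
# Minimal sketch: four ResNet50 branches fused with clinical/pathological
# features in a fully connected head. Feature sizes are illustrative.
import torch
import torch.nn as nn
from torchvision import models


def resnet50_encoder():
    resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    return nn.Sequential(*list(resnet.children())[:-1])      # (B, 2048, 1, 1)


class MultiParametricNet(nn.Module):
    def __init__(self, n_clinical=10, n_classes=2):
        super().__init__()
        self.encoders = nn.ModuleList([resnet50_encoder() for _ in range(4)])
        self.head = nn.Sequential(
            nn.Linear(4 * 2048 + n_clinical, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, n_classes))

    def forward(self, images, clinical):
        # images: list of 4 tensors (B, 3, 224, 224); clinical: (B, n_clinical)
        feats = [enc(img).flatten(1) for enc, img in zip(self.encoders, images)]
        return self.head(torch.cat(feats + [clinical], dim=1))


net = MultiParametricNet()
imgs = [torch.randn(1, 3, 224, 224) for _ in range(4)]
print(net(imgs, torch.randn(1, 10)).shape)  # torch.Size([1, 2])
```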

  • Article type: Journal Article
    OBJECTIVE: Convolutional neural networks (CNNs) are the most widely used deep-learning framework for decoding electroencephalograms (EEGs) due to their exceptional ability to extract hierarchical features from high-dimensional EEG data. Traditionally, CNNs have primarily utilized multi-channel raw EEG data as the input tensor; however, the performance of CNN-based EEG decoding may be enhanced by incorporating phase information alongside amplitude information.
    METHODS: This study introduces a novel CNN architecture called the Hilbert-transformed (HT) and raw EEG network (HiRENet), which incorporates both raw and HT EEG as inputs. This concurrent use of HT and raw EEG aims to integrate phase information with existing amplitude information, potentially offering a more comprehensive reflection of functional connectivity across various brain regions. The HiRENet model was developed using two CNN frameworks: ShallowFBCSPNet and a CNN with a residual block (ResCNN). The performance of the HiRENet model was assessed using a lab-made EEG database to classify human emotions, comparing three input modalities: raw EEG, HT EEG, and a combination of both signals. Additionally, the computational complexity was evaluated to validate the computational efficiency of the ResCNN design.
    RESULTS: The HiRENet model based on ResCNN achieved the highest classification accuracy, with 86.03% for valence and 84.01% for arousal classifications, surpassing traditional CNN methodologies. Considering computational efficiency, ResCNN demonstrated superiority over ShallowFBCSPNet in terms of speed and inference time, despite having a higher parameter count.
    CONCLUSIONS: Our experimental results showed that the proposed HiRENet can potentially serve as a new option to improve the overall performance of deep learning-based EEG decoding.
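    A minimal sketch of how a Hilbert-transformed copy of an EEG epoch can be stacked with the raw signal to form the kind of two-stream input HiRENet uses; the channel count, epoch length, and stacking axis are assumptions, since the abstract does not give the exact tensor layout.
```python
# Minimal sketch: build a raw + Hilbert-transformed EEG input pair.
# Epoch size and stacking layout are assumptions.
import numpy as np
from scipy.signal import hilbert

n_channels, n_samples = 32, 512
raw = np.random.randn(n_channels, n_samples)        # one raw EEG epoch

analytic = hilbert(raw, axis=-1)                    # analytic signal per channel
ht = np.imag(analytic)                              # Hilbert transform (phase-shifted copy)

# Stack the raw and HT copies as two "views" for a multi-input or 2-plane CNN.
hirenet_input = np.stack([raw, ht], axis=0)         # shape (2, 32, 512)
print(hirenet_input.shape)
```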

  • Article type: Journal Article
    BACKGROUND: Colorectal and prostate cancers are the most common types of cancer in men worldwide. To diagnose colorectal and prostate cancer, a pathologist performs a histological analysis on needle biopsy samples. This manual process is time-consuming and error-prone, resulting in high intra- and interobserver variability, which affects diagnosis reliability.
    OBJECTIVE: This study aims to develop an automatic computerized system for diagnosing colorectal and prostate tumors by using images of biopsy samples to reduce time and diagnosis error rates associated with human analysis.
    METHODS: In this study, we proposed a convolutional neural network (CNN) model for classifying colorectal and prostate tumors from multispectral images of biopsy samples. The key idea was to remove the last block of the convolutional layers and halve the number of filters per layer.
    RESULTS: Our results showed excellent performance, with an average test accuracy of 99.8% and 99.5% for the prostate and colorectal data sets, respectively. The system showed excellent performance when compared with pretrained CNNs and other classification methods, as it avoids the preprocessing phase while using a single CNN model for the whole classification task. Overall, the proposed CNN architecture was globally the best-performing system for classifying colorectal and prostate tumor images.
    CONCLUSIONS: The proposed CNN architecture was detailed and compared with previously trained network models used as feature extractors. These CNNs were also compared with other classification techniques. As opposed to pretrained CNNs and other classification approaches, the proposed CNN yielded excellent results. The computational complexity of the CNNs was also investigated, and it was shown that the proposed CNN is better at classifying images than pretrained networks because it does not require preprocessing. Thus, the overall analysis was that the proposed CNN architecture was globally the best-performing system for classifying colorectal and prostate tumor images.
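    A minimal sketch of the stated idea (drop the final convolutional block and halve the per-layer filter counts), applied here to a generic VGG-style stack; the base network, the three-channel input, and the filter widths are assumptions, since the abstract does not specify them.
```python
# Minimal sketch of a reduced VGG-style classifier: the final conv block is
# dropped and per-layer filter counts are halved. Widths are illustrative.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2))


# A full stack might use four blocks of 64-128-256-512 filters; the reduced
# model keeps three blocks with halved widths: 32-64-128.
reduced = nn.Sequential(
    conv_block(3, 32), conv_block(32, 64), conv_block(64, 128),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 2))

print(reduced(torch.randn(1, 3, 128, 128)).shape)  # torch.Size([1, 2])
```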