attention mechanism

  • Article Type: Journal Article
    In clinical settings limited by equipment, lightweight skin lesion segmentation is pivotal because it facilitates integrating the model into diverse medical devices, thereby enhancing operational efficiency. However, a lightweight design may suffer accuracy degradation, especially on complex images such as skin lesion images with irregular regions, blurred boundaries, and oversized boundaries. To address these challenges, we propose an efficient lightweight attention network (ELANet) for the skin lesion segmentation task. In ELANet, two different attention mechanisms in the bilateral residual module (BRM) provide complementary information, enhancing sensitivity to features in the spatial and channel dimensions, respectively; multiple BRMs are then stacked for efficient feature extraction from the input. In addition, the network acquires global information and improves segmentation accuracy by passing feature maps of different scales through multi-scale attention fusion (MAF) operations. Finally, we evaluate ELANet on three publicly available datasets, ISIC2016, ISIC2017, and ISIC2018; the experimental results show that our algorithm achieves mIoU of 89.87%, 81.85%, and 82.87% on the three datasets with only 0.459 M parameters, an excellent balance between accuracy and lightness that is superior to many existing segmentation methods.
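The abstract does not detail the BRM's internals. Purely as an illustration of how channel and spatial attention can act on the two branches of a residual block and complement each other, here is a minimal NumPy sketch; the pooling choices and gating are assumptions, not ELANet's actual design:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    # x: (C, H, W). Gate each channel by its global average response.
    gate = sigmoid(x.mean(axis=(1, 2)))        # (C,)
    return x * gate[:, None, None]

def spatial_attention(x):
    # Gate each spatial position by its mean response across channels.
    gate = sigmoid(x.mean(axis=0))             # (H, W)
    return x * gate[None, :, :]

def bilateral_residual(x):
    # Two parallel attention branches plus the identity shortcut:
    # one branch is sensitive to channels, the other to spatial layout.
    return x + channel_attention(x) + spatial_attention(x)

x = np.random.randn(8, 16, 16)
y = bilateral_residual(x)
assert y.shape == x.shape
```

Stacking several such blocks, as the abstract describes, simply composes `bilateral_residual` repeatedly.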

  • Article Type: Journal Article
    Infrared small target detection plays a crucial role in fields such as military reconnaissance, power-line patrol, medical diagnosis, and security. Advances in deep learning have made convolutional neural networks successful in target segmentation. However, due to challenges such as small target scale, weak signals, and strong background interference in infrared images, convolutional neural networks often suffer missed and false detections in small target segmentation tasks. To address this, an enhanced U-Net method called MST-UNet is proposed, which combines multi-scale feature decomposition and fusion with attention mechanisms. The method uses the Haar wavelet transform instead of max pooling for downsampling in the encoder, minimizing feature loss and improving feature utilization. Additionally, a multi-scale residual unit is introduced to extract contextual information at different scales, enlarging the receptive field and improving feature expression. A triple attention mechanism in the encoder structure further enhances the decoder's use of multidimensional information and feature recovery. Experimental analysis on the NUDT-SIRST dataset demonstrates that the proposed method significantly improves target contour accuracy and segmentation precision, achieving IoU and nIoU values of 80.09% and 80.19%, respectively.
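The idea of replacing max pooling with the Haar wavelet transform can be made concrete: one Haar level halves the resolution yet loses no information, since the four subbands reconstruct the input exactly. A minimal NumPy sketch (not the paper's implementation):

```python
import numpy as np

def haar_downsample(x):
    """One level of the 2D Haar transform on an (H, W) array with even H, W.

    Returns four half-resolution subbands (LL, LH, HL, HH). Unlike max
    pooling, the input is exactly recoverable from them (lossless)."""
    a = x[0::2, 0::2]   # top-left of each 2x2 block
    b = x[0::2, 1::2]   # top-right
    c = x[1::2, 0::2]   # bottom-left
    d = x[1::2, 1::2]   # bottom-right
    ll = (a + b + c + d) / 2.0   # approximation (the downsampled map)
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

x = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar_downsample(x)
assert ll.shape == (2, 2)
# Losslessness: each original pixel is a linear combination of the subbands.
assert np.allclose((ll + lh + hl + hh) / 2.0, x[0::2, 0::2])
```

In an encoder, the detail subbands can be concatenated as extra channels instead of being discarded, which is what "minimizing feature loss" amounts to.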

  • Article Type: Journal Article
    In the context of Industry 4.0, bearings, as critical components of machinery, play a vital role in ensuring operational reliability, so monitoring their health status is of paramount importance. Existing predictive models often focus on point predictions of bearing lifespan, lack the ability to quantify uncertainty, and leave room for improvement in accuracy. To accurately predict the long-term remaining useful life (RUL) of bearings, a novel temporal convolutional network with an attention-based soft-thresholding residual structure for quantifying the lifespan interval of bearings, namely TCN-AM-GPR, is proposed. First, a spatio-temporal graph is constructed from the bearing sensor signals as the input to the prediction model. Second, a residual structure based on soft-threshold decisions with a self-attention mechanism is established to further suppress noise in the collected bearing lifespan signals. Third, the extracted features pass through an interval quantization layer to obtain the bearings' RUL and its confidence interval. The proposed methodology has been verified on the PHM2012 bearing dataset, and comparison of simulation results shows that TCN-AM-GPR achieved the best point prediction evaluation index, with a 2.17% improvement in R2 over the second-best performer, TCN-GPR. It also has the best comprehensive interval prediction index, with a relative decrease of 16.73% in MWP compared to TCN-GPR. The results indicate that TCN-AM-GPR ensures the accuracy of point estimates while offering a superior, practically meaningful description of prediction uncertainty.
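A soft-thresholding residual unit of the kind described here typically applies a shrinkage operator whose threshold is produced by an attention branch, zeroing out small (noise-dominated) activations. A hedged sketch of the core operations; the threshold rule below is an assumed placeholder, not the paper's:

```python
import numpy as np

def soft_threshold(x, tau):
    # Shrink values toward zero; magnitudes below tau are zeroed,
    # which is how the residual unit suppresses noise.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def attention_threshold(x):
    # A common residual-shrinkage choice: scale the mean absolute
    # feature by a sigmoid-gated coefficient to get the threshold.
    avg = np.abs(x).mean()
    alpha = 1.0 / (1.0 + np.exp(-avg))   # gate in (0, 1)
    return alpha * avg

x = np.array([-2.0, -0.1, 0.05, 0.3, 1.5])
tau = attention_threshold(x)
y = soft_threshold(x, tau)
assert np.all(np.abs(y) <= np.abs(x))   # shrinkage never grows a value
```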

  • Article Type: Journal Article
    Castings' surface-defect detection is a crucial machine vision-based automation technology. This paper proposes a fusion-enhanced attention mechanism and an efficient self-architected lightweight YOLO (SLGA-YOLO) to overcome existing target detection algorithms' poor computational efficiency and low defect-detection accuracy. We used the SlimNeck module to improve the neck module and reduce redundant information interference. Integrating the simplified attention module (SimAM) with Large Separable Kernel Attention (LSKA) strengthens the attention mechanism, improving detection performance while significantly reducing computational complexity and memory usage. To enhance the generalization ability of the model's feature extraction, we replaced part of the basic convolutional blocks with the self-designed GhostConvML (GCML) module, based on the addition of p2 detection. We also constructed the Alpha-EIoU loss function to accelerate model convergence. The experimental results demonstrate that the enhanced algorithm increases mAP@0.5 by 3% and mAP@0.5:0.95 by 1.6% on the castings' surface-defect dataset.
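The Alpha-EIoU loss is not defined in the abstract. As a hedged sketch of the underlying idea, the alpha-IoU family raises the IoU term to a power alpha > 1, which up-weights nearly correct boxes during training; EIoU's additional center-distance and width/height penalty terms are omitted here:

```python
def box_iou(b1, b2):
    # Boxes as (x1, y1, x2, y2).
    ix1, iy1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    ix2, iy2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(ix2 - ix1, 0.0) * max(iy2 - iy1, 0.0)
    area1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    area2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (area1 + area2 - inter)

def alpha_iou_loss(b1, b2, alpha=3.0):
    # alpha > 1 sharpens the loss around high-IoU boxes, which is the
    # convergence-speed argument behind the alpha-IoU family.
    return 1.0 - box_iou(b1, b2) ** alpha

pred = (0.0, 0.0, 2.0, 2.0)
gt = (1.0, 0.0, 3.0, 2.0)
iou = box_iou(pred, gt)     # intersection 2, union 6
assert abs(iou - 1 / 3) < 1e-9
```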

  • Article Type: Journal Article
    Addressing the challenges in detecting surface defects on ceramic disks, such as difficulty detecting small defects, variation in defect sizes, and inaccurate defect localization, we propose an enhanced YOLOv5s algorithm. First, we improve the anchor-box structure of the YOLOv5s model to enhance its generalization ability, enabling robust defect detection for objects of varying sizes. Second, we introduce the ECA attention mechanism to improve the model's accuracy in detecting small targets. Under identical experimental conditions, our enhanced YOLOv5s algorithm demonstrates significant improvements, with precision, F1 score, and mAP increasing by 3.1%, 3%, and 4.5%, respectively. Moreover, the accuracy in detecting crack, damage, slag, and spot defects increases by 0.2%, 4.7%, 5.4%, and 1.9%, respectively. Notably, the detection speed improves from 232 frames/s to 256 frames/s. Comparative analysis reveals performance superior to the YOLOv3 and YOLOv4 models, showcasing enhanced capability in identifying small target defects and achieving real-time detection.
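ECA (Efficient Channel Attention) replaces SE attention's fully connected bottleneck with a cheap 1D convolution over the pooled channel descriptor, so channels interact only with their neighbors. A minimal NumPy sketch, with an averaging kernel standing in for the learned 1D convolution:

```python
import numpy as np

def eca(x, k=3):
    """ECA sketch: global average pool per channel, a k-tap 1D conv
    across the channel descriptor, then a sigmoid gate per channel."""
    c = x.shape[0]
    desc = x.mean(axis=(1, 2))                    # (C,) pooled descriptor
    kernel = np.ones(k) / k                       # stand-in for learned weights
    padded = np.pad(desc, k // 2, mode="edge")
    conv = np.array([np.dot(padded[i:i + k], kernel) for i in range(c)])
    gate = 1.0 / (1.0 + np.exp(-conv))            # sigmoid
    return x * gate[:, None, None]

x = np.random.randn(8, 6, 6)
y = eca(x)
assert y.shape == x.shape
```

The appeal for a lightweight detector is that ECA adds only k parameters per module, versus two C x C/r matrices for SE.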

  • Article Type: Journal Article
    Protein-peptide interactions (PPepIs) are vital to understanding cellular functions and can facilitate the design of novel drugs. As an essential component in forming a PPepI, protein-peptide binding sites are the basis for understanding the mechanisms involved in PPepIs. Accurately identifying protein-peptide binding sites therefore becomes a critical task. Traditional experimental methods for studying these binding sites are labor-intensive and time-consuming, and some computational tools have been invented to supplement them. However, these tools are limited in generality or accuracy by their need for ligand information, complex feature construction, or their reliance on modeling at the level of amino acid residues. To address these drawbacks, we describe a geometric attention-based network for peptide binding site identification (GAPS) in this work. The proposed model uses geometric feature engineering to construct atom representations and incorporates multiple attention mechanisms to update relevant biological features. In addition, a transfer learning strategy leverages protein-protein binding site information to enhance protein-peptide binding site recognition, taking into account the structural and biological biases shared between proteins and peptides. Consequently, GAPS demonstrates state-of-the-art performance and excellent robustness on this task. Moreover, our model performs exceptionally across several extended experiments, including predicting apo protein-peptide, protein-cyclic peptide, and AlphaFold-predicted protein-peptide binding sites. These results confirm that GAPS is a powerful, versatile, and stable method suitable for diverse binding site predictions.
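The abstract does not specify GAPS's attention formulation. As a generic illustration of attention updating atom representations, scaled dot-product self-attention over atom feature vectors looks like this; all names and shapes are placeholders:

```python
import numpy as np

def attention(q, k, v):
    # Scaled dot-product attention: each atom's new feature is a
    # similarity-weighted average of all atoms' value vectors.
    scores = q @ k.T / np.sqrt(q.shape[1])
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)             # row-wise softmax
    return w @ v

n, d = 6, 4
atoms = np.random.randn(n, d)   # stand-in atom feature vectors
out = attention(atoms, atoms, atoms)
assert out.shape == (n, d)
```

A geometric variant would additionally bias `scores` with inter-atom distances or orientations, which is presumably where the model's geometric feature engineering enters.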

  • Article Type: Journal Article
    Amid the wave of globalization, cultural amalgamation has surged, bringing to the fore the challenges inherent in cross-cultural communication. To address these challenges, contemporary research has shifted its focus to human-computer dialogue. Especially in the educational paradigm of human-computer dialogue, analysing emotion recognition in user dialogues is particularly important: accurately identifying and understanding users' emotional tendencies improves the efficiency and experience of human-computer interaction. This study aims to improve language emotion recognition in human-computer dialogue. It proposes a hybrid model (BCBA) based on bidirectional encoder representations from transformers (BERT), convolutional neural networks (CNN), bidirectional gated recurrent units (BiGRU), and an attention mechanism. The model leverages BERT to extract semantic and syntactic features from the text. Simultaneously, it integrates CNN and BiGRU networks to delve deeper into textual features, enhancing its proficiency in nuanced sentiment recognition. Furthermore, by introducing the attention mechanism, the model can assign different weights to words based on their emotional tendencies, enabling it to prioritize words with discernible emotional inclinations for more precise sentiment analysis. Through experimental validation on two datasets, the BCBA model achieved remarkable results in emotion recognition and classification, significantly improving both accuracy and F1 score, with an average accuracy of 0.84 and an average F1 score of 0.8. Confusion matrix analysis reveals a minimal classification error rate, and as the number of iterations increases, the model's recall rate stabilizes at approximately 0.7. These results demonstrate the model's robust capabilities in semantic understanding and sentiment analysis and showcase its advantages in handling emotional characteristics of language expressions in a cross-cultural context. The BCBA model provides effective technical support for emotion recognition in human-computer dialogue, which is of great significance for building more intelligent and user-friendly human-computer interaction systems. In the future, we will continue to optimize the model's structure, improve its handling of complex emotions and cross-lingual emotion recognition, and explore applying the model to more practical scenarios to further promote the development and application of human-computer dialogue technology.
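The word-weighting step described above is commonly realized as attention pooling over the recurrent outputs: score each time step against a learned vector, softmax the scores, and take the weighted sum as the sentence representation. A NumPy sketch in which the BiGRU outputs and the scoring vector are random stand-ins for learned quantities:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_pool(h, w):
    """Weight each time step's hidden state h[t] by a scoring vector w,
    then take the weighted sum as the sequence representation."""
    scores = h @ w                 # (T,) one score per word
    alpha = softmax(scores)        # attention weights, sum to 1
    return alpha @ h, alpha        # context (D,), weights (T,)

T, D = 5, 8
h = np.random.randn(T, D)          # stand-in for BiGRU outputs
w = np.random.randn(D)             # stand-in for the learned attention vector
ctx, alpha = attention_pool(h, w)
assert ctx.shape == (D,)
```

The `alpha` vector is also what one inspects to see which words the model treated as emotionally salient.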

  • Article Type: Journal Article
    As the diversity and volume of images continue to grow, the demand for efficient fine-grained image retrieval has surged across numerous fields. However, current deep learning-based approaches to fine-grained image retrieval often concentrate solely on top-layer features, neglecting the relevant information carried in the middle layers, even though this information contains more fine-grained identification content. Moreover, these methods typically employ a uniform weighting strategy during hash code mapping, risking the loss of critical region mappings, an irreversible detriment to fine-grained retrieval tasks. To address these problems, we propose a novel method for fine-grained image retrieval that leverages feature fusion and hash mapping techniques. Our approach harnesses a multi-level feature cascade, emphasizing not just top-layer but also intermediate-layer image features, and integrates a feature fusion module at each level to enhance the extraction of discriminative information. In addition, we introduce an agent self-attention architecture, marking its first application in this context, which steers the model to prioritize long-range features, further avoiding the loss of critical regions in the mapping. Finally, our proposed model significantly outperforms the existing state of the art, improving retrieval accuracy by an average of 40% for 12-bit codes, 22% for 24-bit codes, 16% for 32-bit codes, and 11% for 48-bit codes across five publicly available fine-grained datasets. We also validate the generalization ability and performance stability of the proposed method on another five datasets and with statistical significance tests. Our code can be downloaded from https://github.com/BJFU-CS2012/MuiltNet.git.
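The uniform-versus-weighted hash mapping contrast can be sketched as follows: region features are pooled with non-uniform weights before binarization, so a critical region is not washed out by an average. The weights below are placeholders for whatever the model learns; this is an illustration of the principle, not the paper's mapping:

```python
import numpy as np

def hash_codes(features, weights):
    # features: (R, B) projected features for R regions and B hash bits.
    # Non-uniform weighting keeps discriminative regions dominant in the
    # pooled value that gets binarized; uniform weights would average
    # them away.
    pooled = (features * weights[:, None]).sum(axis=0)   # (B,)
    return np.where(pooled >= 0, 1, -1)

feats = np.random.randn(10, 12)     # 10 regions, 12-bit projection
w = np.abs(np.random.randn(10))
w /= w.sum()                        # normalized region weights
code = hash_codes(feats, w)
assert code.shape == (12,)
```

Retrieval then ranks database items by Hamming distance between such codes.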

  • Article Type: Journal Article
    Automated segmentation of multiple organs and tumors from 3D medical images such as magnetic resonance imaging (MRI) and computed tomography (CT) scans using deep learning can aid in diagnosing and treating cancer. However, organs often overlap and are complexly connected, characterized by extensive anatomical variation and low contrast. In addition, the diversity of tumor shape, location, and appearance, coupled with the dominance of background voxels, makes accurate 3D medical image segmentation difficult. In this paper, a novel 3D large-kernel (LK) attention module is proposed to address these problems and achieve accurate multi-organ and tumor segmentation. The proposed LK attention module combines the advantages of biologically inspired self-attention and convolution, including local contextual information, long-range dependencies, and channel adaptation. The module also decomposes the LK convolution to optimize the computational cost and can easily be incorporated into CNNs such as U-Net. Comprehensive ablation experiments demonstrated the feasibility of the convolutional decomposition and explored the most efficient and effective network design. Among the variants, the best Mid-type 3D LK attention-based U-Net was evaluated on the CT-ORG and BraTS 2020 datasets, achieving state-of-the-art segmentation performance compared with leading CNN- and Transformer-based medical image segmentation methods. The performance improvement due to the proposed 3D LK attention module was statistically validated.
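The savings from decomposing a large-kernel convolution can be checked arithmetically. Assuming a VAN-style split of a 21x21x21 depth-wise kernel into a 5x5x5 depth-wise conv, a 7x7x7 dilated depth-wise conv, and a 1x1x1 point-wise conv (these kernel sizes are borrowed from the 2D LK attention design as an illustration, not numbers from the abstract):

```python
def dw_params(kernel, channels, dims=3):
    # Depth-wise convolution parameters: one kernel^dims filter per channel.
    return channels * kernel ** dims

C = 64
K = 21                       # target large kernel, per spatial dimension
direct = dw_params(K, C)     # one big depth-wise conv: 64 * 21^3

# Decomposition: dense small DW conv + dilated DW conv + point-wise conv.
# The point-wise (1x1x1) conv mixes channels, costing C * C parameters.
decomposed = dw_params(5, C) + dw_params(7, C) + C * C

assert decomposed < direct   # roughly 34k vs. 593k parameters here
```

The cubic growth of 3D kernels is exactly why the decomposition matters far more in 3D than in 2D.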

  • Article Type: Journal Article
    Organ segmentation is a crucial task in many medical imaging applications. Many deep learning models have been developed for it, but they are slow and require substantial computational resources. To address this, attention mechanisms are used to locate important objects of interest within medical images, allowing the model to segment them accurately even in the presence of noise or artifacts. By attending to specific anatomical regions, the model becomes better at segmentation. Medical images carry unique anatomical information that distinguishes them from natural images; unfortunately, most deep learning methods either ignore this information or fail to use it effectively and explicitly. Combining natural intelligence with artificial intelligence, known as hybrid intelligence, has shown promising results in medical image segmentation, making models more robust and able to perform well in challenging situations. In this paper, we propose several methods and models that find attention regions in medical images for deep learning-based segmentation via non-deep-learning methods. We developed these models and trained them using hybrid intelligence concepts. To evaluate their performance, we tested the models on held-out test data and analyzed metrics including the false negative quotient and the false positive quotient. Our findings demonstrate that object shape and layout variations can be explicitly learned to create computational models suitable for each anatomical object. This work opens new possibilities for advances in medical image segmentation and analysis.
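The exact definitions of the false negative and false positive quotients are not given in the abstract. One plausible reading, stated here purely as an assumption, normalizes missed voxels by the object size and spurious voxels by the prediction size:

```python
import numpy as np

def fn_fp_quotients(pred, gt):
    # Assumed definitions: fraction of ground-truth voxels missed (FN
    # quotient) and fraction of predicted voxels that are spurious (FP
    # quotient), for binary masks.
    fn = np.logical_and(gt == 1, pred == 0).sum() / max((gt == 1).sum(), 1)
    fp = np.logical_and(pred == 1, gt == 0).sum() / max((pred == 1).sum(), 1)
    return fn, fp

gt = np.array([[1, 1, 0, 0]])
pred = np.array([[1, 0, 1, 0]])
fn, fp = fn_fp_quotients(pred, gt)
assert fn == 0.5 and fp == 0.5   # one miss of two, one spurious of two
```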
