MobileNetV2

  • Article type: Journal Article
    This paper presents an innovative framework for the automated diagnosis of gastric cancer using artificial intelligence. The proposed approach utilizes a customized deep learning model, MobileNetV2, optimized with a Dynamic variant of the Pelican Optimization Algorithm (DPOA). Combining these techniques yields highly accurate results on a dataset of endoscopic gastric images. To evaluate the model against the benchmark, the data was divided into training (80%) and testing (20%) sets. The MobileNetV2/DPOA model demonstrated an accuracy of 97.73%, precision of 97.88%, specificity of 97.72%, sensitivity of 96.35%, Matthews Correlation Coefficient (MCC) of 96.58%, and F1-score of 98.41%. These results surpass those obtained by other well-known models, such as Convolutional Neural Networks (CNN), Mask Region-Based Convolutional Neural Networks (Mask R-CNN), U-Net, Deep Stacked Sparse Autoencoder Neural Networks (SANNs), and DeepLab v3+, on most quantitative metrics. Despite the promising outcomes, further research is needed: larger and more diverse datasets, as well as exhaustive clinical validation, are necessary to confirm the effectiveness of the proposed method. Implementing this approach in gastric cancer detection could enhance the speed and accuracy of diagnosis, leading to improved patient care and better allocation of healthcare resources.
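
    The paper does not publish the DPOA update equations, so the sketch below only illustrates the general pattern of metaheuristic hyperparameter tuning around MobileNetV2: a population of candidate (learning rate, dropout) pairs is scored by brief training runs, and candidates drift toward the current best. The search bounds, training budget, and move rule are illustrative assumptions, not the paper's algorithm.

        import random
        import tensorflow as tf

        def build_model(learning_rate, dropout_rate, num_classes=2):
            # MobileNetV2 backbone plus a small head; inputs are assumed
            # already scaled to [-1, 1], as MobileNetV2 expects.
            base = tf.keras.applications.MobileNetV2(
                input_shape=(224, 224, 3), include_top=False, weights="imagenet")
            base.trainable = False
            model = tf.keras.Sequential([
                base,
                tf.keras.layers.GlobalAveragePooling2D(),
                tf.keras.layers.Dropout(dropout_rate),
                tf.keras.layers.Dense(num_classes, activation="softmax")])
            model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                          loss="sparse_categorical_crossentropy",
                          metrics=["accuracy"])
            return model

        def fitness(candidate, train_ds, val_ds):
            # Short training budget per candidate; validation accuracy is the score.
            model = build_model(*candidate)
            model.fit(train_ds, epochs=3, verbose=0)
            return model.evaluate(val_ds, verbose=0)[1]

        def search(train_ds, val_ds, pop_size=6, generations=5):
            bounds = [(1e-4, 1e-2), (0.1, 0.5)]      # learning rate, dropout rate
            pop = [[random.uniform(lo, hi) for lo, hi in bounds]
                   for _ in range(pop_size)]
            best_score, best = max((fitness(c, train_ds, val_ds), c) for c in pop)
            for _ in range(generations):
                # Generic move-toward-best step, not the DPOA rule itself.
                pop = [[min(max(c + random.gauss(0, 0.3) * (b - c), lo), hi)
                        for c, b, (lo, hi) in zip(cand, best, bounds)]
                       for cand in pop]
                gen_score, gen_best = max((fitness(c, train_ds, val_ds), c)
                                          for c in pop)
                if gen_score > best_score:
                    best_score, best = gen_score, gen_best
            return best, best_score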

  • Article type: Journal Article
    Cow diseases are a major source of concern. Some animal diseases can be treated successfully if they are discovered at an early stage. If lumpy skin disease (LSD) is not properly treated, it can result in significant financial losses for the farm animal industry. Animals such as cows that contract this disease have their skin seriously affected. Reduced milk production, reduced fertility, growth retardation, miscarriage, and occasionally death are all detrimental effects of this disease in cows. Over the past three months, LSD has affected thousands of cattle in nearly fifty districts across Bangladesh, causing cattle farmers to worry about their livelihood. Although the virus is very contagious, affected cattle can be cured after receiving the right care for a few months. The goal of this study was to use various deep learning and machine learning models to determine whether or not cows have lumpy skin disease. To accomplish this, a novel Convolutional Neural Network (CNN)-based architecture is proposed for detecting the illness. The disease-affected area is identified using image preprocessing and segmentation techniques. After the extraction of numerous features, the proposed model is evaluated for LSD classification. Four CNN models (DenseNet, MobileNetV2, Xception, and InceptionResNetV2) were used for classification within the framework, and evaluation metrics were computed to determine how well each classifier performed. MobileNetV2 achieved 96% classification accuracy and an AUC score of 98%, which compares favorably with recently published related work.
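
    A minimal transfer-learning sketch of the binary lumpy/healthy classification stage, assuming images are arranged in class subfolders under "cattle_images/" (a hypothetical path; the paper's preprocessing and segmentation steps are not reproduced):

        import tensorflow as tf

        train_ds = tf.keras.utils.image_dataset_from_directory(
            "cattle_images/", validation_split=0.2, subset="training",
            seed=42, image_size=(224, 224), batch_size=32)
        val_ds = tf.keras.utils.image_dataset_from_directory(
            "cattle_images/", validation_split=0.2, subset="validation",
            seed=42, image_size=(224, 224), batch_size=32)

        base = tf.keras.applications.MobileNetV2(
            input_shape=(224, 224, 3), include_top=False, weights="imagenet")
        base.trainable = False                       # keep ImageNet features fixed

        inputs = tf.keras.Input(shape=(224, 224, 3))
        x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
        x = base(x, training=False)
        x = tf.keras.layers.GlobalAveragePooling2D()(x)
        outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # lumpy vs. healthy
        model = tf.keras.Model(inputs, outputs)

        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
        model.fit(train_ds, validation_data=val_ds, epochs=10)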

  • Article type: Journal Article
    Machine learning and computer vision have proven to be valuable tools for farmers to streamline resource utilization, leading to more sustainable and efficient agricultural production. These techniques have been applied to strawberry cultivation in the past with limited success. Building on this past work, in this study two separate sets of strawberry images, along with their associated diseases, were collected and subjected to resizing and augmentation. Subsequently, a combined dataset consisting of nine classes was used to fine-tune three distinct pretrained models: vision transformer (ViT), MobileNetV2, and ResNet18. To address the imbalanced class distribution in the dataset, each class was assigned a weight to ensure nearly equal impact during training. To enhance the outcomes, new images were generated by removing backgrounds, reducing noise, and flipping. Task-specific customization was applied to all three algorithms, and their performances were compared. Throughout the experiment, no layers were frozen, ensuring all layers remained active during training. Attention heads were incorporated into the first five and last five layers of MobileNetV2 and ResNet18, while the architecture of ViT was modified. The results indicated accuracies of 98.4%, 98.1%, and 97.9% for ViT, MobileNetV2, and ResNet18, respectively. Despite the imbalanced data, precision, the proportion of correctly identified positive instances among all predicted positive instances, approached nearly 99% with the ViT; MobileNetV2 and ResNet18 showed similar results. Overall, the analysis revealed that the vision transformer exhibited superior performance in strawberry ripeness and disease classification. The inclusion of attention heads in the early layers of ResNet18 and MobileNetV2, along with the inherent attention mechanism in ViT, improved image-identification accuracy. These findings offer farmers the potential to enhance strawberry cultivation through passive camera monitoring alone, promoting the health and well-being of the population.
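
    A short sketch of the class-weighting step described above: each of the nine classes receives a weight inversely proportional to its frequency so that minority classes carry comparable influence on the loss. The label vector here is random stand-in data, not the combined strawberry dataset.

        import numpy as np
        from sklearn.utils.class_weight import compute_class_weight

        y_train = np.random.randint(0, 9, size=4000)   # hypothetical 9-class labels
        classes = np.unique(y_train)
        weights = compute_class_weight(class_weight="balanced",
                                       classes=classes, y=y_train)
        class_weight = dict(zip(classes, weights))
        print(class_weight)
        # In Keras the weights are passed straight into training:
        # model.fit(x_train, y_train, epochs=20, class_weight=class_weight)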

  • Article type: Journal Article
    Accurate and prompt early detection of plant leaf diseases is crucial for safeguarding agricultural crop productivity and ensuring food security. During their life cycle, plant leaves become diseased due to multiple factors such as bacteria, fungi, and weather conditions. In this work, the authors propose a model that aids in the early detection of leaf diseases using a novel hierarchical residual vision transformer built from improved Vision Transformer and ResNet9 models. The proposed model can extract more meaningful and discriminating details while reducing the number of trainable parameters and requiring fewer computations. The method is evaluated on the Local Crop dataset, the PlantVillage dataset, and the Extended PlantVillage dataset, with 13, 38, and 51 different leaf disease classes, respectively. The model is trained using the best trial parameters of the improved Vision Transformer, with features classified by ResNet9. Performance evaluation is carried out across a wide range of aspects on the aforementioned datasets, and the results reveal that the proposed model outperforms other models such as InceptionV3, MobileNetV2, and ResNet50.
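
    The paper's "improved" Vision Transformer and its ResNet9 stage are not specified in the abstract, so the following Keras sketch only illustrates the general hybrid shape: a small ViT-style encoder extracts patch features and a compact residual head classifies them. Patch size, depth, widths, and the 38-class output (matching PlantVillage) are all assumptions.

        import tensorflow as tf
        from tensorflow.keras import layers

        def transformer_block(x, heads=4, dim=64):
            a = layers.MultiHeadAttention(num_heads=heads, key_dim=dim)(x, x)
            x = layers.LayerNormalization()(x + a)        # residual + norm
            f = layers.Dense(dim * 2, activation="gelu")(x)
            f = layers.Dense(x.shape[-1])(f)
            return layers.LayerNormalization()(x + f)

        inputs = tf.keras.Input(shape=(224, 224, 3))
        patches = layers.Conv2D(64, kernel_size=16, strides=16)(inputs)  # 16x16 patches
        tokens = layers.Reshape((14 * 14, 64))(patches)
        for _ in range(4):                                # 4 encoder blocks
            tokens = transformer_block(tokens)
        features = layers.GlobalAveragePooling1D()(tokens)

        # Compact residual classification head, standing in for the ResNet9 stage
        h = layers.Dense(64, activation="relu")(features)
        h = layers.Add()([h, layers.Dense(64, activation="relu")(h)])
        outputs = layers.Dense(38, activation="softmax")(h)

        model = tf.keras.Model(inputs, outputs)
        model.summary()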

  • Article type: Journal Article
    OBJECTIVE: Uterine fibroids (UF) are the most frequent tumors in women and can pose a serious threat of complications such as miscarriage. Prognostic accuracy may also be affected by physician inexperience and fatigue, underscoring the need for automated classification models that can analyze UF across a large variety of images.
    METHODS: A hybrid model is proposed that combines the MobileNetV2 network and a deep convolutional generative adversarial network (DCGAN) into a useful resource for medical practitioners in identifying UF and evaluating its characteristics. Real-time automated classification of UF can aid in diagnosing the condition and minimizing subjective errors. The DCGAN is used for advanced data augmentation to create high-quality UF images, which are labeled into UF and non-uterine-fibroid (NUF) classes. The MobileNetV2 model then classifies the images based on this data.
    RESULTS: The overall performance of the hybrid model is contrasted with that of other models. The hybrid model achieves a real-time classification speed of 40 frames per second (FPS), an accuracy of 97.45%, and an F1 score of 0.9741.
    CONCLUSIONS: This deep learning hybrid approach addresses the shortcomings of current uterine fibroid classification methods.
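
    A compact Keras sketch of the augmentation stage named in METHODS: a DCGAN generator maps noise to synthetic images and a discriminator scores real versus generated. The 64x64 grayscale image size and the layer widths are assumptions; the paper's exact DCGAN configuration is not given.

        import tensorflow as tf
        from tensorflow.keras import layers

        def build_generator(latent_dim=100):
            return tf.keras.Sequential([
                layers.Input(shape=(latent_dim,)),
                layers.Dense(8 * 8 * 256), layers.Reshape((8, 8, 256)),
                layers.Conv2DTranspose(128, 4, strides=2, padding="same"),
                layers.BatchNormalization(), layers.ReLU(),
                layers.Conv2DTranspose(64, 4, strides=2, padding="same"),
                layers.BatchNormalization(), layers.ReLU(),
                layers.Conv2DTranspose(1, 4, strides=2, padding="same",
                                       activation="tanh")])  # 64x64x1 output

        def build_discriminator():
            return tf.keras.Sequential([
                layers.Input(shape=(64, 64, 1)),
                layers.Conv2D(64, 4, strides=2, padding="same"),
                layers.LeakyReLU(0.2),
                layers.Conv2D(128, 4, strides=2, padding="same"),
                layers.LeakyReLU(0.2),
                layers.Flatten(),
                layers.Dense(1, activation="sigmoid")])      # real vs. generated

        # After adversarial training, synthetic images from the generator would
        # be pooled with real scans to train the MobileNetV2 UF/NUF classifier.
        noise = tf.random.normal([16, 100])
        fake_images = build_generator()(noise)               # shape (16, 64, 64, 1)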

  • Article type: Journal Article
    This paper proposes an improved version of the MobileNetV2 neural network (I-MobileNetV2) to address the large parameter counts of existing deep convolutional neural networks and the shortcomings of the lightweight MobileNetV2, namely easy loss of feature information, poor real-time performance, and low accuracy in facial emotion recognition tasks. The network inherits MobileNetV2's depthwise separable convolutions, reducing computational load while maintaining a lightweight profile. It utilizes a reverse fusion mechanism to retain negative features, making information less likely to be lost. The SELU activation function replaces ReLU6 to avoid vanishing gradients. Meanwhile, to improve feature recognition capability, a channel attention mechanism (Squeeze-and-Excitation Networks, SE-Net) is integrated into the MobileNetV2 network. Experiments on the facial expression datasets FER2013 and CK+ showed that the proposed network model achieved facial expression recognition accuracies of 68.62% and 95.96%, improving upon the MobileNetV2 model by 0.72% and 6.14% respectively, while the parameter count decreased by 83.8%. These results empirically verify the effectiveness of the improvements made to the network model.
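
    A Keras sketch of the two modifications named above: a squeeze-and-excitation (SE) channel-attention block, and SELU in place of ReLU6. The reduction ratio, the toy convolutional stem, and where the blocks sit are assumptions; the full I-MobileNetV2 wiring, including its reverse fusion mechanism, is not reproduced.

        import tensorflow as tf
        from tensorflow.keras import layers

        def se_block(x, ratio=16):
            # Squeeze: global pooling to (B, C); excite: two FCs to per-channel gates.
            c = x.shape[-1]
            s = layers.GlobalAveragePooling2D()(x)
            s = layers.Dense(c // ratio, activation="relu")(s)
            s = layers.Dense(c, activation="sigmoid")(s)
            return layers.Multiply()([x, layers.Reshape((1, 1, c))(s)])

        def conv_selu(x, filters):
            # Convolution block using SELU where MobileNetV2 would use ReLU6.
            x = layers.Conv2D(filters, 3, padding="same")(x)
            x = layers.BatchNormalization()(x)
            return layers.Activation("selu")(x)

        inputs = tf.keras.Input(shape=(48, 48, 1))          # FER2013-sized input
        x = conv_selu(inputs, 32)
        x = se_block(x)
        x = layers.GlobalAveragePooling2D()(x)
        outputs = layers.Dense(7, activation="softmax")(x)  # 7 FER2013 emotions
        model = tf.keras.Model(inputs, outputs)
        model.summary()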

  • Article type: Journal Article
    BACKGROUND: More and more genetic and metabolic abnormalities are now known to cause cancer, which is often deadly. Cancerous cells can affect any part of the body, with potentially fatal results. Skin cancer is one of the most prevalent types of cancer, and its prevalence is rising across the globe. Squamous and basal cell carcinomas, as well as melanoma, which is clinically aggressive and causes the majority of deaths, are the primary subtypes of skin cancer. Screening for skin cancer is therefore essential.
    METHODS: Deep learning offers a fast and precise way to detect skin cancer. In this research, deep learning models such as MobileNetV2 and DenseNet are used to detect the two main kinds of tumors: malignant and benign. The HAM10000 dataset is used, consisting of 10,000 skin lesion images covering nonmelanocytic and melanocytic tumors. The methods are compared, and conclusions are drawn from their performance.
    RESULTS: After model evaluation, the accuracy of MobileNetV2 was 85% and that of the customized CNN was 95%. A web application was developed with a Python framework, providing a graphical user interface backed by the best-trained model. The interface allows the user to enter patient details and upload a lesion image; the image is then classified by the trained model, which predicts whether it is cancerous or non-cancerous. The application also displays the percentage of cancer involvement.
    CONCLUSIONS: Comparing the two techniques, the customized CNN gives higher accuracy for the detection of melanoma.
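
    A minimal sketch of the web-application inference path described in RESULTS, using Flask. The model file name, route, and response fields are hypothetical; only the upload-classify-report flow is shown.

        import numpy as np
        import tensorflow as tf
        from flask import Flask, request, jsonify
        from PIL import Image

        app = Flask(__name__)
        model = tf.keras.models.load_model("skin_lesion_model.h5")  # hypothetical file

        @app.route("/predict", methods=["POST"])
        def predict():
            img = Image.open(request.files["lesion"].stream).convert("RGB")
            x = np.array(img.resize((224, 224)), dtype=np.float32)[None] / 255.0
            prob = float(model.predict(x)[0][0])      # sigmoid output in [0, 1]
            return jsonify({"cancerous": prob >= 0.5,
                            "affected_percent": round(prob * 100, 2)})

        if __name__ == "__main__":
            app.run()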

  • Article type: Journal Article
    The ability to recognize the surface type is crucial for both indoor and outdoor mobile robots. Knowing the surface type can help indoor mobile robots move more safely and adjust their movement accordingly. However, recognizing surface characteristics is challenging, since similar planes can appear substantially different; for instance, carpets come in various types and colors. To address this inherent uncertainty in vision-based surface classification, this study first generated a new, unique dataset composed of 2,081 surface images (carpet, tiles, and wood) captured in different indoor environments. Secondly, pre-trained state-of-the-art deep learning models, namely InceptionV3, VGG16, VGG19, ResNet50, Xception, InceptionResNetV2, and MobileNetV2, were utilized to recognize the surface type. Additionally, a lightweight modified MobileNetV2 model was proposed for surface classification. The proposed model has approximately four times fewer total parameters than the original MobileNetV2, reducing the size of the trained model weights from 42 MB to 11 MB; thus, it can be used in robotic systems and embedded systems with limited computational capacity. Lastly, several optimizers, such as SGD, RMSProp, Adam, Adadelta, Adamax, Adagrad, and Nadam, were applied to identify the most efficient network. Experimental results demonstrate that the proposed model outperforms all other applied methods and existing approaches in the literature, achieving 99.52% accuracy and an average score of 99.66% across precision, recall, and F1-score. In addition, the proposed lightweight model was tested in real time on a mobile robot in 11 scenarios covering various indoor environments such as offices, hallways, and homes, resulting in an accuracy of 99.25%. Finally, each model was evaluated in terms of model loading time and processing time; the proposed model requires less of both than the other models.
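
    The abstract does not say how the fourfold parameter reduction was achieved; one concrete mechanism, sketched below, is MobileNetV2's width multiplier (alpha), which scales every layer's channel count. The optimizer sweep maps to a compile-time choice in Keras, and the three-class head matches the carpet/tiles/wood task.

        import tensorflow as tf

        full = tf.keras.applications.MobileNetV2(
            input_shape=(224, 224, 3), include_top=False, weights=None)
        slim = tf.keras.applications.MobileNetV2(
            input_shape=(224, 224, 3), include_top=False, weights=None,
            alpha=0.5)                                # halve all channel widths
        print(full.count_params(), slim.count_params())

        for opt in ["sgd", "rmsprop", "adam", "adadelta",
                    "adamax", "adagrad", "nadam"]:
            backbone = tf.keras.applications.MobileNetV2(
                input_shape=(224, 224, 3), include_top=False,
                weights=None, alpha=0.5)
            m = tf.keras.Sequential([
                backbone,
                tf.keras.layers.GlobalAveragePooling2D(),
                tf.keras.layers.Dense(3, activation="softmax")])
            m.compile(optimizer=opt, loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
            # m.fit(...) would be run per optimizer to compare, as in the study.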

  • Article type: Journal Article
    This research paper presents an innovative approach to brain tumor diagnosis from MRI scans, harnessing deep learning and a metaheuristic algorithm. The study employs MobileNetV2, a deep learning model, optimized by a novel metaheuristic known as the Contracted Fox Optimization Algorithm (MN-V2/CFO). This methodology allows optimal selection of MobileNetV2 hyperparameters, enhancing the accuracy of tumor detection. The model is implemented on the Figshare dataset, a comprehensive collection of MRI scans, and its performance is validated against other approaches; the results are compared with published works including Network (RN), wavelet transform and deep learning (WT/DL), a customized VGG19, and a Convolutional Neural Network (CNN). The results highlight the superior performance of the proposed MN-V2/CFO model: it achieves a precision of 97.68%, an F1-score of 86.22%, a sensitivity of 80.12%, and an accuracy of 97.32%. These findings validate the potential of the proposed model to transform brain tumor diagnosis, contributing to better treatment strategies and improved patient outcomes.
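
    The four figures quoted above follow standard confusion-matrix definitions; this sketch computes them with scikit-learn on toy labels (y_true and y_pred are placeholders, not the paper's predictions).

        from sklearn.metrics import (accuracy_score, precision_score,
                                     recall_score, f1_score)

        y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
        y_pred = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0]
        print("precision  :", precision_score(y_true, y_pred))  # TP / (TP + FP)
        print("sensitivity:", recall_score(y_true, y_pred))     # TP / (TP + FN)
        print("F1-score   :", f1_score(y_true, y_pred))
        print("accuracy   :", accuracy_score(y_true, y_pred))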

  • Article type: Journal Article
    Breast Cancer (BC) is one of the leading causes of death in women worldwide, so timely identification is critical for successful therapy and good survival rates. Transfer Learning (TL) approaches have recently shown promise in aiding the early recognition of BC. In this work, three TL models, MobileNetV2, ResNet50, and VGG16, were combined with an LSTM to extract features from Ultrasound Images (USIs). The Synthetic Minority Over-sampling Technique with Tomek links (SMOTETomek) was then employed to balance the extracted features. The proposed method with VGG16 achieved an F1 score of 99.0%, a Matthews Correlation Coefficient (MCC) and Kappa Coefficient of 98.9%, and an Area Under the Curve (AUC) of 1.0. K-fold cross-validation yielded an average F1 score of 96%. Moreover, the Gradient-weighted Class Activation Mapping (Grad-CAM) method was applied for visualization, and the Local Interpretable Model-agnostic Explanations (LIME) method for interpretability. The Normal Approximation Interval (NAI) and bootstrapping methods were used to calculate Confidence Intervals (CIs). The proposed method achieved a lower CI (LCI), upper CI (UCI), and mean CI (MCI) of 96.50%, 99.75%, and 98.13%, respectively, with the NAI, while the bootstrap method gave a 95% LCI of 93.81%, a UCI of 96.00%, and a bootstrap mean of 94.90%. Furthermore, six state-of-the-art (SOTA) TL models (Xception, NASNetMobile, InceptionResNetV2, MobileNetV2, ResNet50, and VGG16) were compared with the proposed method.
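
    Two steps from the pipeline above, sketched with real library calls: SMOTETomek balancing from imbalanced-learn, and a normal-approximation confidence interval for an observed score. The feature matrix is random stand-in data, not the extracted ultrasound features.

        import numpy as np
        from imblearn.combine import SMOTETomek

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 64))                   # stand-in deep features
        y = np.r_[np.zeros(250, int), np.ones(50, int)]  # imbalanced 250:50 labels
        X_bal, y_bal = SMOTETomek(random_state=0).fit_resample(X, y)
        print(np.bincount(y_bal))                        # roughly balanced counts

        def normal_approx_interval(p, n, z=1.96):
            # 95% NAI for a proportion p observed over n samples.
            half = z * np.sqrt(p * (1 - p) / n)
            return p - half, p + half

        print(normal_approx_interval(0.9813, 300))       # (lower CI, upper CI)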