ResNet34

  • Article type: Journal Article
    The identification of plant leaf diseases is crucial in precision agriculture and plays a pivotal role in advancing the modernization of agriculture. Timely detection and diagnosis of leaf diseases for preventive measures contribute significantly to enhancing both the quantity and quality of agricultural products, thereby fostering the in-depth development of precision agriculture. However, despite the rapid development of research on plant leaf disease identification, the field still faces challenges such as insufficient agricultural datasets and deep learning-based disease identification models with numerous training parameters and insufficient accuracy. This paper proposes a plant leaf disease identification method based on an improved SinGAN and an improved ResNet34 to address these issues. First, an improved SinGAN, called the Reconstruction-Based Single Image Generation Network (ReSinGN), is proposed for image enhancement. This network accelerates model training by using an autoencoder to replace the GAN in SinGAN and incorporates a Convolutional Block Attention Module (CBAM) into the autoencoder to more accurately capture important features and structural information in the images. Random pixel shuffling is introduced in ReSinGN to enable the model to learn richer data representations, further enhancing the quality of the generated images. Second, an improved ResNet34 is proposed for plant leaf disease identification. This involves adding CBAM modules to ResNet34 to alleviate the limitations of parameter sharing, replacing the ReLU activation function with LeakyReLU to address the problem of neuron death, and using transfer learning to accelerate network training. Taking tomato leaf diseases as the experimental subject, the results demonstrate that: (1) ReSinGN generates high-quality images with training at least 44.6 times faster than SinGAN. (2) The Tenengrad score of images generated by ReSinGN is 67.3, an improvement of 30.2 over SinGAN, yielding clearer images. (3) ReSinGN with random pixel shuffling outperforms SinGAN in both image clarity and distortion, achieving the best balance between the two. (4) For tomato leaf disease identification, the improved ResNet34 achieved an average recognition accuracy, precision, accuracy, recall, and F1 score of 98.57%, 96.57%, 98.68%, 97.7%, and 98.17%, respectively, representing enhancements of 3.65, 4.66, 0.88, 4.1, and 2.47%, respectively, over the original ResNet34.
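    A minimal PyTorch sketch of the improved-ResNet34 idea described above: an ImageNet-pretrained ResNet34 whose ReLU activations are swapped for LeakyReLU, with a CBAM-style attention block appended and the classifier head replaced. The CBAM placement, reduction ratio, LeakyReLU slope, and class count are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention followed by spatial attention."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: shared MLP over average- and max-pooled descriptors.
        w = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) + self.mlp(x.amax(dim=(2, 3))))
        x = x * w.view(b, c, 1, 1)
        # Spatial attention: 7x7 conv over channel-wise average and max maps.
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

def improved_resnet34(num_classes: int) -> nn.Module:
    # Transfer learning: start from ImageNet-pretrained weights.
    net = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
    # Swap every ReLU for LeakyReLU to mitigate "dying" neurons (slope is an assumption).
    for name, module in list(net.named_modules()):
        if isinstance(module, nn.ReLU):
            parent = net.get_submodule(name.rsplit(".", 1)[0]) if "." in name else net
            setattr(parent, name.rsplit(".", 1)[-1], nn.LeakyReLU(0.1, inplace=True))
    # Append CBAM after the last residual stage (placement is an assumption).
    net.layer4 = nn.Sequential(net.layer4, CBAM(512))
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

model = improved_resnet34(num_classes=10)          # e.g. 10 tomato leaf disease classes
print(model(torch.randn(1, 3, 224, 224)).shape)    # torch.Size([1, 10])
```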

  • Article type: Journal Article
    Brain cancer is a life-threatening disease requiring close attention. Early and accurate diagnosis using non-invasive medical imaging is critical for successful treatment and patient survival. However, manual diagnosis by expert radiologists is time-consuming and limited in its ability to process large datasets efficiently. Efficient systems capable of analyzing vast amounts of medical data for early tumor detection are therefore urgently needed. Deep learning (DL) with deep convolutional neural networks (DCNNs) has emerged as a promising tool for understanding diseases such as brain cancer through medical imaging modalities, especially MRI, which provides detailed soft-tissue contrast for visualizing tumors and organs. DL techniques have become increasingly popular in current research on brain tumor detection. Unlike traditional machine learning methods that require manual feature extraction, DL models are adept at handling complex data such as MRI and excel in classification tasks, making them well suited to medical image analysis. This study presents a novel Dual DCNN model that accurately classifies cancerous and non-cancerous MRI samples. The Dual DCNN uses two well-performing DL models, InceptionV3 and DenseNet121. Features are extracted from these models by appending a global max pooling layer. The extracted features are then used to train the model with the addition of five fully connected layers, which finally classify MRI samples as cancerous or non-cancerous. The fully connected layers are retrained to learn the extracted features for better accuracy. The technique achieves an accuracy, precision, recall, and F1 score of 99%, 99%, 98%, and 99%, respectively. Furthermore, the study compares the Dual DCNN's performance against various well-known DL models, including DenseNet121, InceptionV3, ResNet architectures, EfficientNetB2, SqueezeNet, VGG16, AlexNet, and LeNet-5, with different learning rates. The results indicate that the proposed approach outperforms these established models.
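    The dual-backbone design can be sketched as follows: InceptionV3 and DenseNet121 act as frozen feature extractors, their global-max-pooled features are concatenated, and five fully connected layers perform the binary decision. The hidden-layer sizes and the freezing strategy are assumptions; the abstract does not give the exact head configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class DualDCNN(nn.Module):
    """Concatenates global-max-pooled features from InceptionV3 and DenseNet121
    and classifies them with five fully connected layers (sizes are assumptions)."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.inception = models.inception_v3(
            weights=models.Inception_V3_Weights.IMAGENET1K_V1, aux_logits=False)
        self.inception.avgpool = nn.AdaptiveMaxPool2d(1)   # swap in global max pooling
        self.inception.fc = nn.Identity()                  # keep the 2048-d feature vector
        densenet = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
        self.dense_features = densenet.features            # (B, 1024, h, w) feature maps
        self.head = nn.Sequential(                         # five fully connected layers
            nn.Linear(2048 + 1024, 1024), nn.ReLU(),
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):                                  # x: (B, 3, 299, 299)
        with torch.no_grad():                              # backbones kept frozen; only the head learns
            f1 = self.inception(x)                                               # (B, 2048)
            f2 = F.adaptive_max_pool2d(self.dense_features(x), 1).flatten(1)     # (B, 1024)
        return self.head(torch.cat([f1, f2], dim=1))

model = DualDCNN(num_classes=2)                            # cancerous vs non-cancerous MRI
print(model(torch.randn(2, 3, 299, 299)).shape)            # torch.Size([2, 2])
```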

  • Article type: Journal Article
    Many underdeveloped nations, particularly in Africa, struggle with deadly cancer-related diseases. In women especially, the incidence of breast cancer is rising daily because of ignorance and delayed diagnosis. Cancer can be treated effectively only if it is correctly identified and diagnosed in its very early stages of development. Computer-aided diagnosis and medical image analysis techniques can accelerate and automate the classification of cancer. This research applies transfer learning from the Residual Network 18 (ResNet18) and Residual Network 34 (ResNet34) architectures to detect breast cancer. The study examined how breast cancer can be identified in breast mammography images using transfer learning from ResNet18 and ResNet34, and developed a demo application for radiologists using the trained model with the best validation accuracy. A dataset of 1,200 breast X-ray mammography images from the National Radiological Society's (NRS) archives was employed. The dataset was categorised as implant cancer negative, implant cancer positive, cancer negative, and cancer positive in order to increase the consistency of X-ray mammography image classification and produce better features. For the multi-class classification of the images, the study achieved average validation accuracies of 86.7% for ResNet34 and 92% for ResNet18 in classifying benign and malignant cancer cases. A prototype web application showcasing the ResNet18 model has been created. The results show how transfer learning can improve the accuracy of breast cancer detection, providing invaluable assistance to medical professionals, particularly in an African setting.
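    A hedged sketch of the transfer-learning setup the abstract describes: both ResNet18 and ResNet34 are initialized with ImageNet weights, their final layers are replaced for the four mammography classes, and validation accuracy decides which model backs the demo app. Freezing the backbone is an assumption; the class labels are taken from the abstract.

```python
import torch
import torch.nn as nn
from torchvision import models

CLASSES = ["implant cancer negative", "implant cancer positive",
           "cancer negative", "cancer positive"]             # the four categories in the study

def build(arch: str) -> nn.Module:
    """Transfer learning: an ImageNet-pretrained backbone with a new 4-class head."""
    weights = {"resnet18": models.ResNet18_Weights.IMAGENET1K_V1,
               "resnet34": models.ResNet34_Weights.IMAGENET1K_V1}[arch]
    net = getattr(models, arch)(weights=weights)
    for p in net.parameters():                               # freeze the pretrained backbone (assumption)
        p.requires_grad = False
    net.fc = nn.Linear(net.fc.in_features, len(CLASSES))     # only this layer is trained
    return net

@torch.no_grad()
def validation_accuracy(net: nn.Module, loader, device: str = "cpu") -> float:
    net.eval().to(device)
    correct = total = 0
    for images, labels in loader:
        preds = net(images.to(device)).argmax(dim=1)
        correct += (preds == labels.to(device)).sum().item()
        total += labels.numel()
    return correct / max(total, 1)

# After training, the candidate with the higher validation accuracy (the study reports
# 92% for ResNet18 vs 86.7% for ResNet34) would back the radiologists' demo app.
candidates = {arch: build(arch) for arch in ("resnet18", "resnet34")}
```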

  • Article type: Journal Article
    Early-stage lung cancer is typically characterized clinically by the presence of isolated lung nodules. Thousands of cases are examined each year, and a single case usually contains numerous lung CT slices. Detecting and classifying early microscopic lung nodules is demanding because of their small size and limited characterization capability. An accurate lung nodule classification model that performs well and is sensitive to microscopic nodules is therefore needed.
    This paper uses the ResNet34 network as the basic classification model. A new cascade lung nodule classification method is proposed to classify lung nodules into six classes instead of the traditional two or four. It can effectively distinguish six nodule types, including ground-glass and solid nodules, benign and malignant nodules, and nodules with predominantly ground-glass or solid components.
    The traditional multi-classification method and the proposed cascade classification method were tested on real lung nodule data collected in the clinic. The test results demonstrate that the cascade classification method achieves an accuracy of 80.04%, outperforming the conventional multi-classification approach.
    Unlike existing methods that categorize only the benign or malignant nature of lung nodules, the approach presented in this paper classifies lung nodules into six categories more accurately. It offers a rapid, precise, and dependable way to classify six distinct categories of lung nodules, improving classification accuracy compared with the traditional multi-classification method.
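    One way to realize the cascade idea is a coarse classifier that routes each nodule patch to a dedicated fine classifier, so that the leaves of the cascade cover six categories. The grouping used below (3 coarse groups x 2 fine classes) is purely illustrative; the abstract does not spell out the paper's exact cascade structure.

```python
import torch
import torch.nn as nn
from torchvision import models

def resnet34_head(num_classes: int) -> nn.Module:
    """ResNet34 backbone reused at every stage of the cascade."""
    net = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

class CascadeNoduleClassifier(nn.Module):
    """Two-level cascade: a coarse classifier routes each CT patch to one of
    three dedicated fine classifiers, giving 3 x 2 = 6 leaf categories.
    The exact category grouping here is an assumption for illustration."""
    def __init__(self):
        super().__init__()
        self.coarse = resnet34_head(3)                       # e.g. ground-glass / mixed / solid
        self.fine = nn.ModuleList([resnet34_head(2) for _ in range(3)])

    @torch.no_grad()
    def predict(self, x):                                    # x: (B, 3, 224, 224)
        group = self.coarse(x).argmax(dim=1)                 # coarse route per sample
        labels = torch.empty_like(group)
        for g in range(3):
            mask = group == g
            if mask.any():
                sub = self.fine[g](x[mask]).argmax(dim=1)    # refine within the group
                labels[mask] = g * 2 + sub                   # flatten to 6 final classes
        return labels

model = CascadeNoduleClassifier().eval()
print(model.predict(torch.randn(4, 3, 224, 224)))            # tensor of class ids in [0, 5]
```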

  • Article type: Journal Article
    Chronic wounds greatly affect quality of life and require more intensive care than acute wounds, including follow-up appointments with a doctor to track healing. Good wound treatment promotes healing and reduces complications. Wound care requires precise and reliable wound measurement to optimize patient treatment and outcomes according to evidence-based best practices, and images are used to objectively assess wound state by quantifying key healing parameters. Nevertheless, robust segmentation of wound images is complex because of the high diversity of wound types and imaging conditions. This study proposes and evaluates a novel hybrid model for wound segmentation in medical images. The model combines advanced deep learning techniques with traditional image processing methods to improve the accuracy and reliability of wound segmentation. The main objective is to overcome the limitations of the existing segmentation method (UNet) by leveraging the combined advantages of both paradigms. In our investigation, we introduce a hybrid architecture in which a ResNet34 is used as the encoder and a UNet as the decoder. The combination of ResNet34's deep representation learning and UNet's efficient feature extraction yields notable benefits. The architectural design successfully integrates high-level and low-level features, enabling the generation of segmentation maps with high precision and accuracy. Applying the model to real data, we obtained an Intersection over Union (IoU) of 0.973, a Dice score of 0.986, and an accuracy of 0.9736. According to these results, the proposed method is more precise and accurate than the current state of the art.
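    A ResNet34-encoder / UNet-decoder hybrid of the kind described can be assembled, for example, with the segmentation_models_pytorch package, together with the Dice and IoU metrics the paper reports. Whether the authors used this package is not stated; treat it as one possible realization.

```python
import torch
import segmentation_models_pytorch as smp   # pip install segmentation-models-pytorch

# ResNet34 encoder (ImageNet-pretrained) feeding a UNet decoder, binary wound mask.
model = smp.Unet(encoder_name="resnet34", encoder_weights="imagenet",
                 in_channels=3, classes=1)

def dice_and_iou(pred_logits, target, eps=1e-7):
    """Dice and IoU for a binary mask, the metrics reported in the paper."""
    pred = (torch.sigmoid(pred_logits) > 0.5).float()
    inter = (pred * target).sum()
    union = pred.sum() + target.sum()
    dice = (2 * inter + eps) / (union + eps)
    iou = (inter + eps) / (union - inter + eps)
    return dice.item(), iou.item()

x = torch.randn(1, 3, 256, 256)                       # input size must be divisible by 32
mask = (torch.rand(1, 1, 256, 256) > 0.5).float()     # dummy ground-truth mask
print(dice_and_iou(model(x), mask))
```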

  • Article type: Journal Article
    Since the global COVID-19 outbreak in the spring of 2020, online instruction has replaced traditional classroom instruction as the main method of educating students. Teaching physical education online can be challenging, as it is difficult to teach students certain movements, track student movement accurately, and assign appropriate exercises. This paper proposes an online teaching support system with sustainable development features that utilizes several large datasets. The system is based on the deep learning image recognition algorithm ResNet34 and can analyze and correct student actions in real time for gymnastics, dance, basketball, and other sports. By combining an attention mechanism module with the original ResNet34, the detection precision of the system is enhanced. The sustainability of the system is evident from the fact that the dataset can be expanded as new sports categories emerge and can be kept current in real time. According to the experiments, the target identification accuracy of the proposed system, which combines ResNet34 and the attention mechanism, is higher than that of several other methods currently in use. The proposed technique outperforms the original ResNet34 in accuracy, precision, and recall by 4.1%, 2.8%, and 3.6%, respectively. The suggested approach significantly improves student action correction in virtual sports instruction.
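    A sketch of a ResNet34 augmented with an attention module for action classification. The abstract does not specify the attention type, so squeeze-and-excitation channel attention is used here as one plausible choice; the insertion points and the number of action classes are likewise assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class SEAttention(nn.Module):
    """Squeeze-and-excitation channel attention, shown as one plausible
    'attention mechanism module' to combine with ResNet34."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))                     # squeeze: global average pooling
        return x * w.view(*w.shape, 1, 1)                   # excite: per-channel reweighting

def attention_resnet34(num_actions: int) -> nn.Module:
    net = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
    # Insert attention after each residual stage (placement is an assumption).
    for name, channels in (("layer1", 64), ("layer2", 128), ("layer3", 256), ("layer4", 512)):
        setattr(net, name, nn.Sequential(getattr(net, name), SEAttention(channels)))
    net.fc = nn.Linear(net.fc.in_features, num_actions)
    return net

# Classify a frame of a student's movement into one of, say, 20 action classes
# so incorrect poses can be flagged in real time.
model = attention_resnet34(num_actions=20)
print(model(torch.randn(1, 3, 224, 224)).shape)             # torch.Size([1, 20])
```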

  • Article type: Journal Article
    Cardiovascular disease (CVD) has become a common health problem, and its prevalence and mortality are rising year by year. Blood pressure (BP) is an important physiological parameter of the human body and an important physiological indicator for the prevention and treatment of CVD. Existing intermittent measurement methods do not fully reflect the true BP status of the human body and cannot eliminate the constricting feel of a cuff. Accordingly, this study proposes a deep learning network based on the ResNet34 framework for continuous prediction of BP using only the PPG signal. After a series of pre-processing steps, the high-quality PPG signals are first passed through a multi-scale feature extraction module to expand the receptive field and enhance the ability to perceive features. Useful feature information is then extracted by stacking multiple residual modules with channel attention to increase the accuracy of the model. Finally, in the training stage, the Huber loss function is adopted to stabilize the iterative process and obtain the optimal solution of the model. On a subset of the MIMIC dataset, the errors of both SBP and DBP predicted by the model met the AAMI standard, while the accuracy of DBP reached Grade A of the BHS standard and the accuracy of SBP nearly reached Grade A. The proposed method verifies the potential and feasibility of PPG signals combined with deep neural networks in the field of continuous BP monitoring. Furthermore, the method is easy to deploy in portable devices and is consistent with the future trend of wearable blood-pressure-monitoring devices (e.g., smartphones and smartwatches).
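    The pipeline sketched below mirrors the described ingredients at a much smaller scale: a multi-scale 1-D convolutional front end over the PPG window, a pooled regression head producing systolic and diastolic values, and the Huber loss for training. Kernel sizes, channel widths, and window length are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel 1-D convolutions with different kernel sizes over the PPG signal,
    concatenated to widen the receptive field (kernel sizes are assumptions)."""
    def __init__(self, in_ch=1, out_ch=64):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(in_ch, out_ch // 4, kernel_size=k, padding=k // 2)
            for k in (3, 7, 15, 31)
        ])

    def forward(self, x):                          # x: (B, 1, L)
        return torch.cat([b(x) for b in self.branches], dim=1)

class TinyBPNet(nn.Module):
    """Greatly reduced stand-in for the paper's ResNet34-style regressor:
    multi-scale front end, a few conv layers, global pooling, and a head
    that outputs systolic and diastolic blood pressure."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            MultiScaleBlock(1, 64),
            nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.head = nn.Linear(128, 2)              # [SBP, DBP] in mmHg

    def forward(self, x):
        return self.head(self.backbone(x))

model = TinyBPNet()
criterion = nn.HuberLoss(delta=1.0)                # Huber loss stabilizes training, as in the paper
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

ppg = torch.randn(8, 1, 1024)                      # a batch of pre-processed PPG windows
bp_target = torch.tensor([[120.0, 80.0]]).repeat(8, 1)
loss = criterion(model(ppg), bp_target)
loss.backward()
optimizer.step()
print(loss.item())
```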

  • Article type: Journal Article
    In this work, a new method has been developed to detect adulteration in avocado oil by combining optical images and their processing with deep learning algorithms. For this purpose, samples of avocado oil adulterated with refined olive oil at concentrations from 1% to 15% (v/v) were prepared. Two groups of images of the different samples were obtained, one under conditions considered bright and the other dark, for a total of 1,800 photographs. To obtain these images under both conditions, the exposure (shutter speed) of the camera was modified (1/30 s for bright conditions and 1/500 s for dark conditions). A residual neural network (ResNet34) was used to process and classify the images obtained. A different model was developed for each condition, and during blind validation of the models, approximately 95% of the images were correctly classified.
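    A brief sketch of how a separate ResNet34 classifier could be set up per lighting condition, with the adulteration levels taken from an image-folder layout. The folder structure, class naming, and paths are hypothetical.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Adulteration levels are read from sub-folder names (e.g. "0pct" ... "15pct");
# the folder layout and names are hypothetical.
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def condition_model(data_dir: str):
    """One ResNet34 classifier per lighting condition ('bright' or 'dark'),
    mirroring the paper's separate model for each shutter-speed setting."""
    data = datasets.ImageFolder(data_dir, transform=tfm)
    net = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
    net.fc = nn.Linear(net.fc.in_features, len(data.classes))
    loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)
    return net, loader

# bright_net, bright_loader = condition_model("avocado_oil/bright")   # hypothetical path
# dark_net, dark_loader = condition_model("avocado_oil/dark")         # hypothetical path
```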

  • Article type: Journal Article
    OBJECTIVE: Tongue diagnosis is one of the characteristic methods of traditional Chinese medicine (TCM), but traditional tongue diagnosis is affected by many factors, and its differential diagnosis results are not widely recognized. Tongue diagnosis instruments are a product of the modernization of tongue diagnosis and offer standard and objective advantages in clinical practice. In this study, a tongue image dataset and detection model were constructed based on standard tongue images, and a human-computer interaction intelligent health detector for tongue image recognition was built using a deep learning convolutional neural network (CNN) algorithm and visual question answering technology.
    METHODS: In this research, 1420 tongue images were collected. After screening, experts judged and annotated the tongue images to form the tongue image datasets. An artificial intelligence network framework based on a deep learning CNN, namely ResNet34, was then applied to this dataset to automatically extract image features and classify the tongue images. Finally, the VGG16 network framework was applied to the same dataset as a comparison model to contrast classification performance.
    RESULTS: Relevant datasets were formed by collating and annotating the collected tongue images, and it was verified that the ResNet34 architecture better performs the tooth-mark and tongue-feature recognition task. Compared with similar learning tasks in existing studies, the accuracy of the tooth-marked tongue recognition model proposed in this study is more than 10% higher, indicating that the CNN algorithm can distinguish tooth-marked tongues more accurately and effectively. At the same time, using the dataset and model combined with visual question answering technology, an AI health detector for TCM tongue image identification was designed that can make health assessments and give suggestions to users.
    CONCLUSIONS: This study adopts a convolutional neural network model based on deep learning, which makes the extraction of tongue features quicker and more convenient. At the same time, the model architecture performs well, generalizes strongly, and judges users' health status more accurately.
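    A small sketch of how the two architectures compared in the study can be instantiated side by side in torchvision, with the classifier heads adapted to the tongue-image label set. The two-class label set shown is an assumption; the abstract does not give the full label scheme.

```python
import torch
import torch.nn as nn
from torchvision import models

def tongue_classifier(arch: str, num_classes: int = 2) -> nn.Module:
    """Build ResNet34 or VGG16 with a head for tongue-image classes
    (e.g. tooth-marked vs non-tooth-marked; the label set is an assumption)."""
    if arch == "resnet34":
        net = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
        net.fc = nn.Linear(net.fc.in_features, num_classes)
    elif arch == "vgg16":
        net = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        net.classifier[6] = nn.Linear(net.classifier[6].in_features, num_classes)
    else:
        raise ValueError(arch)
    return net

for arch in ("resnet34", "vgg16"):
    net = tongue_classifier(arch)
    n_params = sum(p.numel() for p in net.parameters()) / 1e6
    print(f"{arch}: {n_params:.1f}M parameters")
# Both models would then be trained with an identical loop on the 1420-image
# dataset and compared by classification accuracy, as in the study.
```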

  • Article type: Journal Article
    An artificial intelligence-based method to rapidly detect adulterated lentil flour in real time is presented. Mathematical models based on convolutional neural networks and transfer learning (viz., ResNet34) have been trained to identify lentil flour samples that contain trace levels of wheat (gluten) or pistachio (nuts), aiding two relevant populations (people with celiac disease and people with nut allergies, respectively). The technique is based on the analysis of photographs taken with a simple reflex camera and their classification into groups assigned by adulterant type and amount (up to 50 ppm). Two different algorithms were trained, one per adulterant, using a total of 2200 images for each neural network. Evaluating the models on blind sets of data (10% of the collected images, initially and randomly separated) led to strong performance: 99.1% of lentil flour samples containing ground pistachio were correctly classified, while 96.4% accuracy was reached for samples containing wheat flour.
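    The blind-validation protocol (a randomly held-out 10% of the images) and the one-model-per-adulterant setup can be sketched as follows; the folder paths and class layout are hypothetical.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def train_blind_split(data_dir: str, blind_fraction: float = 0.10):
    """Randomly hold out 10% of the images as the blind set used to evaluate
    the model, as described in the paper; the folder layout is an assumption
    (one sub-folder per adulterant level, e.g. '0ppm', '10ppm', ..., '50ppm')."""
    data = datasets.ImageFolder(data_dir, transform=tfm)
    n_blind = int(len(data) * blind_fraction)
    train_set, blind_set = torch.utils.data.random_split(
        data, [len(data) - n_blind, n_blind],
        generator=torch.Generator().manual_seed(0))
    net = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
    net.fc = nn.Linear(net.fc.in_features, len(data.classes))
    return net, train_set, blind_set

# One network per adulterant, trained on ~2200 images each:
# wheat_net, wheat_train, wheat_blind = train_blind_split("lentil_flour/wheat")            # hypothetical path
# pistachio_net, pist_train, pist_blind = train_blind_split("lentil_flour/pistachio")      # hypothetical path
```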