Deep metric learning

  • Article type: Journal Article
    Trypanosomiasis, a significant health concern in South America, South Asia, and Southeast Asia, requires active surveys for effective disease control. To address this, we have developed a hybrid model that combines deep metric learning (DML) and image retrieval. The model identifies Trypanosoma species in microscopic images of thin-blood-film examinations. Built on a ResNet50 backbone network, the trained model demonstrated outstanding performance, achieving an accuracy exceeding 99.71% and a recall of up to 96%. Acknowledging the need for automated tools in field scenarios, we demonstrated the potential of our model as an autonomous screening approach, realized by combining a prevailing convolutional neural network (CNN) application with images returned from a vector database by the KNN algorithm. This achievement is primarily attributed to the implementation of the Triplet Margin Loss function, which yielded 98% precision. The robustness of the model, demonstrated in five-fold cross-validation, highlights the DML-based ResNet50 network as a state-of-the-art CNN model, with an AUC above 98%. The adoption of DML significantly improves the performance of the model, leaves it unaffected by variations in the dataset, and renders it a useful tool for fieldwork studies. DML offers several advantages over conventional classification models for managing large-scale datasets with a high volume of classes, enhancing scalability. The model can generalize to novel classes not encountered during training, which is particularly advantageous in scenarios where new classes may continually emerge. It is also well suited to applications requiring precise recognition, especially discrimination between closely related classes. Furthermore, DML exhibits greater resilience to class imbalance, as it concentrates on learning distances or similarities, which are more tolerant of such imbalances. These contributions establish the effectiveness and practicality of the DML model, particularly in fieldwork research.
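The abstract above pairs a triplet-margin embedding loss with KNN retrieval against a vector database. A minimal pure-Python sketch of both ingredients, assuming toy 2-D embeddings (the vectors and species labels are illustrative, not from the paper):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Hinge form of the triplet margin loss:
    max(0, d(anchor, positive) - d(anchor, negative) + margin).
    Training pushes same-species pairs together and different-species
    pairs at least `margin` apart in the embedding space."""
    return max(0.0, euclidean(anchor, positive)
               - euclidean(anchor, negative) + margin)

def knn_retrieve(query, database, k=3):
    """Return the k database entries nearest to the query embedding,
    as a KNN search over a vector database would."""
    return sorted(database, key=lambda e: euclidean(query, e["vector"]))[:k]

# Toy 2-D embeddings standing in for the model's output vectors.
db = [
    {"label": "T. brucei", "vector": [0.10, 0.20]},
    {"label": "T. cruzi",  "vector": [0.90, 0.80]},
    {"label": "T. evansi", "vector": [0.15, 0.25]},
]
```

At inference time a query image is embedded once and its nearest stored neighbors vote on the species, which is what allows new classes to be added without retraining.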

  • Article type: Journal Article
    During radiologic interpretation, radiologists read patient identifiers from the metadata of medical images to recognize the patient being examined. However, it is challenging for radiologists to identify "incorrect" metadata and patient identification errors. We propose a method that uses a patient re-identification technique to link correct metadata to an image set of trunk computed tomography images whose metadata are lost or wrongly assigned. The method is based on a feature-vector matching technique that uses a deep feature extractor to adapt to the cross-vendor domain contained in the scout computed tomography image dataset. To identify "incorrect" metadata, we calculated the highest similarity score between a follow-up image and a stored baseline image linked to the correct metadata. The re-identification performance tests whether the image with the highest similarity score belongs to the same patient, i.e., whether the metadata attached to the image are correct. The similarity scores between follow-up and baseline images for the same "correct" patients were generally greater than those for "incorrect" patients. The proposed feature extractor was sufficiently robust to extract individually distinguishable features without additional training, even for unknown scout computed tomography images. Furthermore, the proposed augmentation technique further improved the re-identification performance on the subset from different vendors by incorporating changes in width magnification caused by changes in patient-table height during each examination. We believe that metadata checking using the proposed method would help detect metadata with an "incorrect" patient identifier assigned due to unavoidable errors such as human error.
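The matching step described here — score a follow-up image's feature vector against every stored baseline and keep the best match — can be sketched as below. Cosine similarity and the toy patient identifiers are illustrative assumptions; the abstract does not specify the similarity measure:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u))
                  * math.sqrt(sum(b * b for b in v)))

def reidentify(followup_vec, baseline_db):
    """Score the follow-up feature vector against every stored baseline
    and return the best-matching patient ID with its similarity score.
    A low best score would flag possibly incorrect metadata."""
    best_id, best_score = None, -1.0
    for patient_id, baseline_vec in baseline_db.items():
        score = cosine_similarity(followup_vec, baseline_vec)
        if score > best_score:
            best_id, best_score = patient_id, score
    return best_id, best_score

# Toy 3-D feature vectors keyed by hypothetical patient identifiers.
baselines = {"P001": [1.0, 0.0, 0.2], "P002": [0.0, 1.0, 0.1]}
```

The metadata check then compares the returned patient ID against the identifier recorded in the image's metadata.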

  • Article type: Journal Article
    Unknown diseases often emerge with few or no samples available, so zero-shot learning and few-shot learning have promising applications in medical image analysis. In this paper, we propose a Cross-Modal Deep Metric Learning Generalized Zero-Shot Learning (CM-DML-GZSL) model. The proposed network consists of a visual feature extractor, a fixed semantic feature extractor, and a deep regression module, forming a two-stream network over multiple modalities. In a multi-label setting, each sample contains, on average, a small number of positive labels and a large number of negative labels. This positive-negative imbalance dominates the optimization procedure and may prevent an effective correspondence between visual features and semantic vectors from being established during training, resulting in low accuracy. We therefore introduce a novel weighted focused Euclidean distance metric loss. This loss not only dynamically increases the weight of hard samples and decreases the weight of simple samples, but also promotes the connection between samples and the semantic vectors corresponding to their positive labels, which helps mitigate bias in predicting unseen classes in the generalized zero-shot learning setting. The weighted focused Euclidean distance metric loss function can dynamically adjust sample weights, enabling zero-shot multi-label learning for chest X-ray diagnosis, as experimental results on large publicly available datasets demonstrate.
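The exact form of the weighted focused Euclidean distance loss is not given in the abstract. The sketch below only illustrates the stated idea — up-weight hard labels with a focal-style exponent while pulling the visual embedding toward the semantic vectors of its positive labels and pushing it away from negatives — with the `gamma` exponent and margin form as illustrative assumptions:

```python
import math

def euclidean(u, v):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def focused_euclidean_loss(embedding, semantic_vecs, labels,
                           gamma=2.0, margin=1.0):
    """Focal-style weighted Euclidean loss over all labels of one sample.

    Positive labels pull the visual embedding toward their semantic
    vector; negative labels push it beyond `margin`. Each per-label
    error is scaled by err**gamma, so hard (large-error) labels
    dominate while easy, already-satisfied labels contribute little.
    """
    total = 0.0
    for vec, is_positive in zip(semantic_vecs, labels):
        d = euclidean(embedding, vec)
        err = d if is_positive else max(0.0, margin - d)
        total += (err ** gamma) * err  # focal weight * base error
    return total / len(labels)
```

With `gamma > 0` the many easy negative labels of a sample contribute almost nothing, which is the mechanism the paper credits for countering positive-negative imbalance.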

  • Article type: Journal Article
    In face recognition systems, light direction, reflection, and emotional and physical changes in the face are among the main factors that make recognition difficult. Researchers continue to work on deep learning-based algorithms to overcome these difficulties. It is essential to develop models that achieve high accuracy at reduced computational cost, especially for real-time face recognition systems. Deep metric learning algorithms, a form of representation learning, are frequently preferred in this field. However, besides the extraction of strong representative features, the appropriate classification of these feature vectors is also an essential factor affecting performance. This study proposes a Scene Change Indicator (SCI) to reduce or eliminate false recognition rates in sliding windows with a deep metric learning model. The model detects blocks where the scene does not change and refines the comparison threshold used in the classifier stage to a new, more precise value. Increasing the sensitivity ratio across the unchanging scene blocks allows fewer comparisons among the samples in the database. The proposed model reached 99.25% accuracy and a 99.28% F-1 score compared with the original deep metric learning model. Experimental results show that even if facial images of the same person differ within unchanging scenes, misrecognition can be minimized because the sample area being compared is narrowed.
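A minimal sketch of the stated mechanism — detect unchanged-scene blocks, then apply a stricter comparison threshold there. All numeric values (the frame-difference cutoff, the size and direction of the threshold adjustment) are illustrative assumptions, since the abstract does not specify them:

```python
def scene_changed(prev_frame, frame, change_cutoff=0.2):
    """Flag a scene change when the mean absolute pixel difference
    between consecutive frames exceeds the cutoff (pixels in [0, 1])."""
    diff = sum(abs(a - b) for a, b in zip(prev_frame, frame)) / len(frame)
    return diff > change_cutoff

def comparison_threshold(base, same_scene, tighten=0.05):
    """Within an unchanged scene the candidate set is narrower, so the
    classifier's similarity threshold can be raised (made stricter)
    to cut false recognitions; otherwise the base threshold is kept."""
    return base + tighten if same_scene else base
```

In a sliding-window pipeline, `scene_changed` would be evaluated per block and `comparison_threshold` would feed the embedding-distance classifier for that block.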

  • Article type: Journal Article
    Biological fingerprints extracted from clinical images can be used for patient identity verification to determine misfiled clinical images in picture archiving and communication systems. However, such methods have not been incorporated into clinical use, and their performance can degrade with variability in the clinical images. Deep learning can be used to improve the performance of these methods. A novel method is proposed to automatically identify individuals among examined patients using posteroanterior (PA) and anteroposterior (AP) chest X-ray images. The proposed method uses deep metric learning based on a deep convolutional neural network (DCNN) to overcome the extreme classification requirements for patient validation and identification. It was trained on the NIH chest X-ray dataset (ChestX-ray8) in three steps: preprocessing, DCNN feature extraction with an EfficientNetV2-S backbone, and classification with deep metric learning. The proposed method was evaluated using two public datasets and two clinical chest X-ray image datasets containing data from patients undergoing screening and hospital care. A 1280-dimensional feature extractor pretrained for 300 epochs performed the best with an area under the receiver operating characteristic curve of 0.9894, an equal error rate of 0.0269, and a top-1 accuracy of 0.839 on the PadChest dataset containing both PA and AP view positions. The findings of this study provide considerable insights into the development of automated patient identification to reduce the possibility of medical malpractice due to human errors.
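The equal error rate reported above (0.0269) is the operating point at which false accepts and false rejects balance. A threshold-sweep sketch over genuine and impostor similarity scores, assuming scores lie in [0, 1]:

```python
def far_frr(genuine, impostor, threshold):
    """False accept rate (impostor scores at/above threshold) and
    false reject rate (genuine scores below it)."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

def equal_error_rate(genuine, impostor, steps=1000):
    """Sweep similarity thresholds in [0, 1] and return the error rate
    at the point where FAR and FRR are closest to equal."""
    best_gap, best_eer = None, None
    for i in range(steps + 1):
        t = i / steps
        far, frr = far_frr(genuine, impostor, t)
        gap = abs(far - frr)
        if best_gap is None or gap < best_gap:
            best_gap, best_eer = gap, (far + frr) / 2
    return best_eer
```

Here "genuine" scores come from image pairs of the same patient and "impostor" scores from pairs of different patients; a lower EER means better separation of the two distributions.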

  • Article type: Journal Article
    Background: Colorectal cancer (CRC) is a leading cause of cancer-related deaths worldwide. The best method to prevent CRC is a colonoscopy. However, not all colon polyps are at risk of becoming cancerous, so polyps are classified using different classification systems, and further treatment and procedures are based on the polyp's classification. Nevertheless, classification is not easy. We therefore propose two novel automated classification systems that assist gastroenterologists in classifying polyps based on the NICE and Paris classifications.
    Methods: We built two classification systems: one classifies polyps by shape (Paris), the other by texture and surface pattern (NICE). For the Paris classification, a two-step process is introduced: first, the polyp is detected and cropped on the image; second, the polyp is classified from the cropped area with a transformer network. For the NICE classification, we designed a few-shot learning algorithm based on deep metric learning. The algorithm creates an embedding space for polyps that allows classification from a few examples, to account for the scarcity of NICE-annotated images in our database.
    Results: For the Paris classification, we achieve an accuracy of 89.35%, surpassing all papers in the literature and establishing a new state of the art and a baseline accuracy for other publications on a public dataset. For the NICE classification, we achieve a competitive accuracy of 81.13%, thereby demonstrating the viability of the few-shot learning paradigm for polyp classification in data-scarce environments. Additionally, we show different ablations of the algorithms. Finally, we elaborate on the explainability of the system by showing heat maps of the neural network that explain its activations.
    Conclusions: Overall, we introduce two polyp classification systems to assist gastroenterologists. We achieve state-of-the-art performance in the Paris classification and demonstrate the viability of the few-shot learning paradigm in the NICE classification, addressing the prevalent data scarcity issues faced in medical machine learning.
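The NICE branch classifies from a few labelled examples in an embedding space. One common realization of that idea — not necessarily the paper's exact algorithm — is nearest-prototype classification over mean class embeddings:

```python
import math

def euclidean(u, v):
    """Euclidean distance between two equal-length embeddings."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def class_prototypes(support_set):
    """Mean embedding per class, computed from the few labelled
    support examples available for each class."""
    protos = {}
    for label, vectors in support_set.items():
        dim = len(vectors[0])
        protos[label] = [sum(v[i] for v in vectors) / len(vectors)
                         for i in range(dim)]
    return protos

def classify(query_embedding, protos):
    """Assign the query polyp embedding to the nearest prototype."""
    return min(protos, key=lambda lbl: euclidean(query_embedding, protos[lbl]))

# Toy 2-D embeddings; class names are illustrative only.
support = {"NICE-1": [[0, 0], [0, 2]], "NICE-2": [[4, 4], [6, 4]]}
```

Because only the prototypes depend on labelled data, a new class can be added by embedding a handful of its examples, which matches the data-scarcity motivation above.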

  • Article type: Journal Article
    Single-cell RNA sequencing (scRNA-seq) has significantly accelerated the experimental characterization of distinct cell lineages and types in complex tissues and organisms. Cell-type annotation is of great importance in most scRNA-seq analysis pipelines. However, manual cell-type annotation relies heavily on the quality of the scRNA-seq data and marker genes, and can therefore be laborious and time-consuming. Furthermore, the heterogeneity of scRNA-seq datasets, such as batch effects induced by different scRNA-seq protocols and samples, poses another challenge for accurate cell-type annotation. To overcome these limitations, we propose a novel pipeline, termed TripletCell, for cross-species, cross-protocol and cross-sample cell-type annotation. We developed a cell embedding and dimension-reduction module for feature extraction (FE) in TripletCell, namely TripletCell-FE, which leverages a deep metric learning-based algorithm to model the relationships between the reference gene expression matrix and the query cells. Our experimental studies on 21 datasets (covering nine scRNA-seq protocols, two species and three tissues) demonstrate that TripletCell outperforms state-of-the-art approaches for cell-type annotation. More importantly, regardless of protocol or species, TripletCell delivers outstanding and robust performance in annotating different types of cells. TripletCell is freely available at https://github.com/liuyan3056/TripletCell. We believe that TripletCell is a reliable computational tool for accurately annotating various cell types from scRNA-seq data and will be instrumental in generating novel biological hypotheses in cell biology.

  • Article type: Journal Article
    With the spread of COVID-19, the need for remote detection of physical conditions is increasing; for example, in several situations body temperature has to be measured remotely to detect febrile individuals. Aiming at remote detection of physical conditions, this study investigated anomaly detection based on facial color and skin temperature, indicators related to hemodynamics. Triplet loss was used to extract features related to subjective health feelings from facial images, to evaluate whether a relationship exists between subjective health feelings and facial images. Classification of subjective health feelings related to poor physical condition based on these features was also attempted. To obtain the data, an experiment was conducted over approximately one year, measuring facial visible and thermal images along with subjective feelings related to physical condition. Anomaly levels were defined based on subjective health feelings. Anomaly detection models were constructed by classifying anomalous and normal data based on subjective health feelings. Facial visible and thermal images were applied to the trained model to quantitatively evaluate the accuracy of classifying anomalous conditions related to subjective health. At higher anomaly levels, a combination of facial visible and thermal images classified subjective health feelings with moderate accuracy. Further, the results suggest that the eyes and the sides of the nose may indicate subjective health feelings.

  • Article type: Journal Article
    Face recognition is a widely accepted biometric identifier, as the face contains a lot of information about the identity of a person. The goal of this study is to match the 3D face of an individual to a set of demographic properties (sex, age, BMI, and genomic background) that are extracted from unidentified genetic material. We introduce a triplet loss metric learner that compresses facial shape into a lower-dimensional embedding while preserving information about the property of interest. The metric learner is trained for multiple facial segments to allow a global-to-local part-based analysis of the face. To learn directly from 3D mesh data, spiral convolutions are used along with a novel mesh-sampling scheme, which retains uniformly sampled points at different resolutions. The capacity of the model for establishing identity from facial shape against a list of probe demographics is evaluated by enrolling the embeddings for all properties into a support vector machine classifier or regressor and then combining them using a naive Bayes score fuser. Results obtained by 10-fold cross-validation for biometric verification and identification show that part-based learning significantly improves the system's performance for encodings from both our geometric metric learner and principal component analysis.
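The per-segment scores are combined with a naive Bayes score fuser. Under the usual independence assumption this amounts to summing per-segment log-likelihood ratios; the sketch below assumes each segment classifier emits a match probability in (0, 1), which is an illustrative simplification of the paper's SVM outputs:

```python
import math

def fuse_scores(segment_probs):
    """Naive Bayes fusion of per-segment match probabilities:
    treating segments as independent, sum the per-segment
    log-likelihood ratios log(P(match) / P(non-match))."""
    return sum(math.log(p / (1.0 - p)) for p in segment_probs)

def verify(segment_probs, threshold=0.0):
    """Accept the claimed identity when the fused log-likelihood
    ratio clears the decision threshold."""
    return fuse_scores(segment_probs) > threshold
```

A fused score of 0 means the segments are collectively uninformative (every probability at 0.5); moving the threshold trades false accepts against false rejects.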

  • Article type: Journal Article
    Rapid identification of plant diseases is essential for effectively mitigating and controlling their influence on plants. For automatic plant disease identification, classification of plant leaf images with deep learning algorithms is currently the most accurate and popular method. Existing methods rely on collecting large amounts of annotated image data and cannot flexibly adjust recognition categories, whereas we develop a new image retrieval system for automated detection, localization, and identification of individual leaf diseases in an open setting, that is, one where newly added disease types can be identified without retraining. In this paper, we first optimize the YOLOv5 algorithm to enhance recognition of small objects, which helps extract leaf objects more accurately; second, we integrate classification recognition with metric learning, jointly learning to categorize images and measure similarity, thus capitalizing on the prediction ability of available image classification models; and finally, we construct an efficient and nimble image retrieval system to quickly determine leaf disease types. We present detailed experimental results on three publicly available leaf disease datasets and demonstrate the effectiveness of our system. This work lays the groundwork for plant disease surveillance applicable to intelligent agriculture and to crop research such as nutrition diagnosis, health status surveillance, and more.
