Skin lesion classification

  • Article type: Journal Article
    Skin cancer stands as one of the foremost challenges in oncology, with its early detection being crucial for successful treatment outcomes. Traditional diagnostic methods depend on dermatologist expertise, creating a need for more reliable, automated tools. This study explores deep learning, particularly Convolutional Neural Networks (CNNs), to enhance the accuracy and efficiency of skin cancer diagnosis. Leveraging the HAM10000 dataset, a comprehensive collection of dermatoscopic images encompassing a diverse range of skin lesions, this study introduces a sophisticated CNN model tailored for the nuanced task of skin lesion classification. The model's architecture is intricately designed with multiple convolutional, pooling, and dense layers, aimed at capturing the complex visual features of skin lesions. To address the challenge of class imbalance within the dataset, an innovative data augmentation strategy is employed, ensuring a balanced representation of each lesion category during training. Furthermore, this study introduces a CNN model with optimized layer configuration and data augmentation, significantly boosting diagnostic precision in skin cancer detection. The model's learning process is optimized using the Adam optimizer, with parameters fine-tuned over 50 epochs and a batch size of 128 to enhance the model's ability to discern subtle patterns in the image data. A Model Checkpoint callback ensures the preservation of the best model iteration for future use. The proposed model demonstrates an accuracy of 97.78% with a notable precision of 97.9%, recall of 97.9%, and an F2 score of 97.8%, underscoring its potential as a robust tool in the early detection and classification of skin cancer, thereby supporting clinical decision-making and contributing to improved patient outcomes in dermatology.
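
    A minimal Keras-style sketch of the training setup described above (Adam optimizer, 50 epochs, batch size 128, augmentation, and a checkpoint callback) might look as follows; the layer sizes, augmentation ranges, and checkpoint file name are illustrative assumptions rather than the authors' published architecture.

```python
# Illustrative sketch only: layer sizes, augmentation ranges, and file names
# are assumptions; the abstract specifies Adam, 50 epochs, batch size 128,
# and a model-checkpoint callback, but not the exact architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # HAM10000 covers seven lesion categories

def build_cnn(input_shape=(64, 64, 3)):
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Augmentation to counter class imbalance (hypothetical parameter choices).
augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=20, zoom_range=0.1,
    horizontal_flip=True, vertical_flip=True)

model = build_cnn()
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_model.keras", monitor="val_accuracy", save_best_only=True)
# model.fit(augmenter.flow(x_train, y_train, batch_size=128),
#           validation_data=(x_val, y_val), epochs=50,
#           callbacks=[checkpoint])
```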

  • Article type: Journal Article
    Skin cancer is a lethal disease, and its early detection plays a pivotal role in preventing its spread to other body organs and tissues. Artificial Intelligence (AI)-based automated methods can play a significant role in its early detection. This study presents a novel AI-based approach, termed 'DualAutoELM', for the effective identification of various types of skin cancers. The proposed method leverages a network of autoencoders, comprising two distinct autoencoders: the spatial autoencoder and the FFT (Fast Fourier Transform) autoencoder. The spatial autoencoder specializes in learning spatial features within input lesion images, whereas the FFT autoencoder learns to capture textural and distinguishing frequency patterns within transformed input skin lesion images through the reconstruction process. The use of attention modules at various levels within the encoder part of these autoencoders significantly improves their discriminative feature learning capabilities. An Extreme Learning Machine (ELM) with a single feedforward layer is trained to classify skin malignancies using the features recovered from the bottleneck layers of these autoencoders. The 'HAM10000' and 'ISIC-2017' datasets, both publicly available, are used to thoroughly assess the suggested approach. The experimental findings demonstrate the accuracy and robustness of the proposed technique, with AUC, precision, and accuracy values of 0.98, 97.68%, and 97.66% for the 'HAM10000' dataset, and 0.95, 86.75%, and 86.68% for the 'ISIC-2017' dataset, respectively. This study highlights the potential of the suggested approach for accurate detection of skin cancer.
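
    The classification stage rests on a single-hidden-layer Extreme Learning Machine. A generic NumPy sketch of such an ELM is shown below; it assumes the spatial and FFT bottleneck features have already been extracted and concatenated, and it does not reproduce the paper's autoencoders or attention modules.

```python
# Generic single-hidden-layer Extreme Learning Machine sketch (NumPy).
# Assumes features were already extracted from the two autoencoder
# bottlenecks and concatenated; the autoencoders and attention modules
# described in the abstract are not reproduced here.
import numpy as np

class ELMClassifier:
    def __init__(self, n_hidden=512, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)  # random, untrained projection

    def fit(self, X, y_onehot):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Output weights solved in closed form via the Moore-Penrose pseudoinverse.
        self.beta = np.linalg.pinv(H) @ y_onehot
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# Hypothetical usage with concatenated spatial + FFT bottleneck features:
# feats_train = np.concatenate([spatial_bottleneck, fft_bottleneck], axis=1)
# elm = ELMClassifier().fit(feats_train, labels_train_onehot)
# preds = elm.predict(feats_test)
```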

  • Article type: Journal Article
    Skin cancer is one of the common types of cancer. It spreads quickly and is not easy to detect in the early stages, posing a major threat to human health. In recent years, deep learning methods have attracted widespread attention for skin cancer detection in dermoscopic images. However, training a practical classifier becomes highly challenging due to inter-class similarity and intra-class variation in skin lesion images. To address these problems, we propose a multi-scale fusion structure that combines shallow and deep features for more accurate classification. Simultaneously, we implement three approaches to the problem of class imbalance: class weighting, label smoothing, and resampling. In addition, the HAM10000_RE dataset, in which hair features are removed, is used to demonstrate the role of hair features in the classification process. Using the HAM10000_SE dataset, in which lesion regions are segmented, we demonstrate that the region of interest is the most critical classification feature. We evaluated the effectiveness of our model using the HAM10000 and ISIC2019 datasets. The results showed that this method performed well in dermoscopic classification tasks, with an ACC of 94.0% and an AUC of 99.3% on the HAM10000 dataset, and an ACC of 89.8% on the ISIC2019 dataset. The overall performance of our model is excellent in comparison to state-of-the-art models.
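
    The three imbalance-handling strategies named above (class weighting, label smoothing, resampling) can be sketched as follows, here using PyTorch as an illustrative framework; the smoothing factor and batch size are assumptions, and the paper's multi-scale fusion network itself is not reproduced.

```python
# Sketch of the three imbalance-handling strategies: class weighting,
# label smoothing, and resampling. Framework choice and hyperparameters
# are illustrative assumptions.
from collections import Counter

import torch
from torch.utils.data import WeightedRandomSampler, DataLoader

def make_loader_and_loss(dataset, labels, num_classes, batch_size=32):
    counts = Counter(labels)
    # 1) Class weights, inversely proportional to class frequency.
    class_weights = torch.tensor(
        [len(labels) / (num_classes * counts[c]) for c in range(num_classes)],
        dtype=torch.float)
    # 2) Label smoothing folded into the cross-entropy loss.
    criterion = torch.nn.CrossEntropyLoss(weight=class_weights,
                                          label_smoothing=0.1)
    # 3) Resampling: draw minority-class samples more often per epoch.
    sample_weights = [class_weights[y].item() for y in labels]
    sampler = WeightedRandomSampler(sample_weights,
                                    num_samples=len(labels),
                                    replacement=True)
    loader = DataLoader(dataset, batch_size=batch_size, sampler=sampler)
    return loader, criterion
```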

  • Article type: Journal Article
    In this paper, we propose a multi-task learning (MTL) network based on the label-level fusion of metadata and hand-crafted features, using unsupervised clustering to generate new cluster labels as an optimization goal. We propose an MTL module (MTLM) that incorporates an attention mechanism to enable the model to learn more integrated, variable information. We propose a dynamic strategy to adjust the loss weights of different tasks and trade off the contributions of multiple branches. Instead of feature-level fusion, we propose label-level fusion and combine the results of our proposed MTLM with the results of the image classification network to achieve better lesion prediction on multiple dermatological datasets. We verify the effectiveness of the proposed model by quantitative and qualitative measures. The MTL network using multi-modal clues and label-level fusion can yield significant performance improvements for skin lesion classification.
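
    Two of the mechanisms mentioned above can be sketched generically: a dynamic weighting of task losses (here via learnable homoscedastic-uncertainty terms, which may differ from the paper's exact strategy) and label-level fusion of the MTLM and image-classifier outputs by averaging class probabilities.

```python
# Generic sketches of dynamic multi-task loss weighting and label-level
# fusion; both are illustrative and not the paper's exact formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicTaskWeights(nn.Module):
    """Weights each task loss by a learnable log-variance term."""
    def __init__(self, num_tasks):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = 0.0
        for i, loss in enumerate(task_losses):
            precision = torch.exp(-self.log_vars[i])
            total = total + precision * loss + self.log_vars[i]
        return total

def label_level_fusion(logits_mtlm, logits_image, alpha=0.5):
    """Fuse predictions at the label level rather than the feature level."""
    probs = (alpha * F.softmax(logits_mtlm, dim=1)
             + (1 - alpha) * F.softmax(logits_image, dim=1))
    return probs.argmax(dim=1)
```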

  • Article type: Journal Article
    Crafting effective deep learning models for medical image analysis is a complex task, particularly in cases where the medical image dataset lacks significant inter-class variation. This challenge is further aggravated when employing such datasets to generate synthetic images using generative adversarial networks (GANs), as the output of GANs heavily relies on the input data. In this research, we propose a novel filtering algorithm called Cosine Similarity-based Image Filtering (CosSIF). We leverage CosSIF to develop two distinct filtering methods: Filtering Before GAN Training (FBGT) and Filtering After GAN Training (FAGT). FBGT involves the removal of real images that exhibit similarities to images of other classes before utilizing them as the training dataset for a GAN. On the other hand, FAGT focuses on eliminating synthetic images with less discriminative features compared to real images used for training the GAN. The experimental results reveal that the utilization of either the FAGT or FBGT method reduces low inter-class variation in clinical image classification datasets and enables GANs to generate synthetic images with greater discriminative features. Moreover, modern transformer and convolutional-based models, trained with datasets that utilize these filtering methods, lead to less bias toward the majority class, more accurate predictions of samples in the minority class, and overall better generalization capabilities. Code and implementation details are available at: https://github.com/mominul-ssv/cossif.
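
    A simplified sketch of cosine-similarity filtering on pre-extracted feature vectors is given below; the similarity threshold and feature extractor are assumptions, and the authors' actual implementation is in the repository linked above.

```python
# Simplified sketch of cosine-similarity-based filtering on feature vectors.
# Threshold and feature extractor are assumptions; see the linked repository
# for the authors' implementation.
import numpy as np

def cosine_similarity_matrix(A, B):
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

def filter_similar(target_feats, other_feats, threshold=0.95):
    """Keep target-class samples whose maximum similarity to any
    other-class sample stays below the threshold (FBGT-style filtering
    of real images; the same idea applies to synthetic images in FAGT)."""
    sims = cosine_similarity_matrix(target_feats, other_feats)
    keep_mask = sims.max(axis=1) < threshold
    return keep_mask
```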

  • Article type: Journal Article
    Skin cancer, including the highly lethal malignant melanoma, poses a significant global health challenge with a rising incidence rate. Early detection plays a pivotal role in improving survival rates. This study aims to develop an advanced deep learning-based approach for accurate skin lesion classification, addressing challenges such as limited data availability, class imbalance, and noise. Modern deep neural network designs, such as ResNeXt101, SeResNeXt101, ResNet152V2, DenseNet201, GoogLeNet, and Xception, are used in the study and optimised using the SGD technique. The dataset comprises diverse skin lesion images from the HAM10000 and ISIC datasets. Noise and artifacts are tackled using image inpainting, and data augmentation techniques enhance training sample diversity. The ensemble technique is utilized, creating both average and weighted average ensemble models. Grid search optimizes model weight distribution. The individual models exhibit varying performance, with metrics including recall, precision, F1 score, and MCC. The "Average ensemble model" achieves a harmonious balance, emphasizing precision, F1 score, and recall, yielding high performance. The "Weighted ensemble model" capitalizes on the individual models' strengths, showcasing heightened precision and MCC and yielding outstanding performance. The ensemble models consistently outperform individual models, with the average ensemble model attaining a macro-average ROC-AUC score of 96% and the weighted ensemble model achieving a macro-average ROC-AUC score of 97%. This research demonstrates the efficacy of ensemble techniques in significantly improving skin lesion classification accuracy. By harnessing the strengths of individual models and addressing their limitations, the ensemble models exhibit robust and reliable performance across various metrics. The findings underscore the potential of ensemble techniques in enhancing medical diagnostics and contributing to improved patient outcomes in skin lesion diagnosis.
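
    The averaging and weighted-averaging ensembles, together with a coarse grid search over the weights, can be sketched as follows; the weight grid and accuracy criterion are illustrative assumptions rather than the study's exact search procedure.

```python
# Sketch of average and weighted-average ensembling of per-model class
# probabilities, plus a coarse grid search over the weights.
import itertools
import numpy as np

def average_ensemble(prob_list):
    return np.mean(prob_list, axis=0)

def weighted_ensemble(prob_list, weights):
    weights = np.asarray(weights) / np.sum(weights)
    return np.tensordot(weights, np.stack(prob_list), axes=1)

def grid_search_weights(prob_list, y_true, step=0.1):
    """Search weight combinations that sum to 1 and maximise accuracy
    (an assumed criterion) on a validation set."""
    best_w, best_acc = None, -1.0
    candidates = np.arange(0.0, 1.0 + step, step)
    for w in itertools.product(candidates, repeat=len(prob_list)):
        if not np.isclose(sum(w), 1.0):
            continue
        acc = np.mean(weighted_ensemble(prob_list, w).argmax(1) == y_true)
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc
```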

  • Article type: Journal Article
    This paper introduces a novel approach to enhance content-based image retrieval, validated on two benchmark datasets: ISIC-2017 and ISIC-2018. These datasets comprise skin lesion images that are crucial for innovations in skin cancer diagnosis and treatment. We advocate the use of a pre-trained Vision Transformer (ViT), a relatively uncharted concept in the realm of image retrieval, particularly in medical scenarios. In contrast to the traditionally employed Convolutional Neural Networks (CNNs), our findings suggest that ViT offers a more comprehensive understanding of the image context, essential in medical imaging. We further incorporate a weighted multi-loss function, delving into various losses such as triplet loss, distillation loss, contrastive loss, and cross-entropy loss. Our exploration investigates the most resilient combination of these losses to create a robust multi-loss function, thus enhancing the robustness of the learned feature space and ameliorating the precision and recall of the retrieval process. Instead of using all the loss functions, the proposed multi-loss function utilizes the combination of only cross-entropy loss, triplet loss, and distillation loss, and yields improvements in mean average precision of 6.52% and 3.45% on ISIC-2017 and ISIC-2018, respectively. Another innovation in our methodology is a two-branch network strategy, which concurrently boosts image retrieval and classification. Through our experiments, we underscore the effectiveness and the pitfalls of diverse loss configurations in image retrieval. Furthermore, our approach underlines the advantages of retrieval-based classification through majority voting rather than relying solely on the classification head, leading to enhanced prediction for melanoma, the most lethal type of skin cancer. Our results surpass existing state-of-the-art techniques on the ISIC-2017 and ISIC-2018 datasets, improving mean average precision by 1.01% and 4.36% respectively, emphasizing the efficacy and promise of Vision Transformers paired with our tailor-made weighted loss function, especially in medical contexts. The proposed approach's effectiveness is substantiated through thorough ablation studies and an array of quantitative and qualitative outcomes. To promote reproducibility and support forthcoming research, our source code will be accessible on GitHub.
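
    A sketch of the retained loss combination (cross-entropy, triplet, and distillation) and of retrieval-based classification by majority voting is given below; the loss weights, distillation temperature, and neighbourhood size k are assumptions.

```python
# Sketch of a weighted multi-loss (cross-entropy + triplet + distillation)
# and of retrieval-based classification by majority vote; coefficients,
# temperature, and k are illustrative assumptions.
import torch
import torch.nn.functional as F

def multi_loss(logits, labels, embeddings, anchor_idx, pos_idx, neg_idx,
               teacher_logits, w_ce=1.0, w_tri=1.0, w_kd=1.0, temperature=4.0):
    ce = F.cross_entropy(logits, labels)
    triplet = F.triplet_margin_loss(embeddings[anchor_idx],
                                    embeddings[pos_idx],
                                    embeddings[neg_idx],
                                    margin=1.0)
    kd = F.kl_div(F.log_softmax(logits / temperature, dim=1),
                  F.softmax(teacher_logits / temperature, dim=1),
                  reduction="batchmean") * temperature ** 2
    return w_ce * ce + w_tri * triplet + w_kd * kd

def retrieval_vote(query_emb, gallery_embs, gallery_labels, k=5):
    """Retrieval-based classification: majority vote over the k nearest
    gallery images by cosine similarity."""
    sims = F.normalize(query_emb, dim=0) @ F.normalize(gallery_embs, dim=1).T
    topk = sims.topk(k).indices
    return torch.mode(gallery_labels[topk]).values
```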

  • Article type: Journal Article
    Even though past research suggests that visual learning may benefit from conceptual knowledge, current interventions for medical image evaluation often focus on procedural knowledge, mainly by teaching classification algorithms. We compared the efficacy of pure procedural knowledge (three-point checklist for evaluating skin lesions) versus combined procedural plus conceptual knowledge (histological explanations for each of the three points). All students then trained their classification skills with a visual learning resource that included images of two types of pigmented skin lesions: benign nevi and malignant melanomas. Both treatments produced significant and long-lasting effects on diagnostic accuracy in transfer tasks. However, only students in the combined procedural plus conceptual knowledge condition significantly improved their diagnostic performance in classifying lesions they had seen before in the pre- and post-tests. Findings suggest that the provision of additional conceptual knowledge supported error correction mechanisms.

  • Article type: Comparative Study
    In dermatology, deep learning may be applied for skin lesion classification. However, for a given input image, a neural network only outputs a label, obtained using the class probabilities, which do not model uncertainty. Our group developed a novel method to quantify uncertainty in stochastic neural networks. In this study, we aimed to train such a network for skin lesion classification, evaluate its diagnostic performance and uncertainty, and compare the results to the assessments by a group of dermatologists. By passing duplicates of an image through such a stochastic neural network, we obtained distributions per class, rather than a single probability value. We interpreted the overlap between these distributions as the output uncertainty, where a high overlap indicated a high uncertainty, and vice versa. We had 29 dermatologists diagnose a series of skin lesions and rate their confidence. We compared these results to those of the network. The network achieved a sensitivity and specificity of 50% and 88%, comparable to the average dermatologist (68% and 73%, respectively). Higher confidence (lower uncertainty) was associated with better diagnostic performance in both the neural network and the dermatologists. We found no correlation between the uncertainty of the neural network and the confidence of dermatologists (R = -0.06, p = 0.77). Dermatologists should not blindly trust the output of a neural network, especially when its uncertainty is high. The addition of an uncertainty score may stimulate the human-computer interaction.
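
    One common way to realise the "duplicates through a stochastic network" idea is Monte Carlo dropout, sketched below; the overlap statistic shown here is a crude proxy and not the authors' exact uncertainty measure.

```python
# Monte Carlo dropout sketch: keeping dropout active at inference yields a
# distribution of probabilities per class; the overlap proxy below is an
# assumption, not the authors' published metric.
import torch

def stochastic_predictions(model, image, n_samples=50):
    model.train()  # keep dropout layers stochastic at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(image.unsqueeze(0)), dim=1)[0]
                             for _ in range(n_samples)])
    return probs  # shape: (n_samples, num_classes)

def uncertainty_from_overlap(probs):
    """Crude proxy: a small gap between the two highest mean class
    probabilities, relative to their spread, implies high overlap and
    hence high uncertainty."""
    means, stds = probs.mean(0), probs.std(0)
    top2 = torch.topk(means, 2)
    gap = top2.values[0] - top2.values[1]
    spread = stds[top2.indices].sum()
    return (spread / (gap + spread + 1e-8)).item()
```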

  • Article type: Journal Article
    The fusion of multi-modal data, e.g., medical images and genomic profiles, can provide complementary information and further benefit disease diagnosis. However, multi-modal disease diagnosis confronts two challenges: (1) how to produce discriminative multi-modal representations by exploiting complementary information while avoiding noisy features from different modalities. (2) how to obtain an accurate diagnosis when only a single modality is available in real clinical scenarios. To tackle these two issues, we present a two-stage disease diagnostic framework. In the first multi-modal learning stage, we propose a novel Momentum-enriched Multi-Modal Low-Rank (M3LR) constraint to explore the high-order correlations and complementary information among different modalities, thus yielding more accurate multi-modal diagnosis. In the second stage, the privileged knowledge of the multi-modal teacher is transferred to the unimodal student via our proposed Discrepancy Supervised Contrastive Distillation (DSCD) and Gradient-guided Knowledge Modulation (GKM) modules, which benefit the unimodal-based diagnosis. We have validated our approach on two tasks: (i) glioma grading based on pathology slides and genomic data, and (ii) skin lesion classification based on dermoscopy and clinical images. Experimental results on both tasks demonstrate that our proposed method consistently outperforms existing approaches in both multi-modal and unimodal diagnoses.
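
    The teacher-to-student transfer in the second stage can be illustrated with a plain knowledge-distillation step, shown below; this generic KL-based sketch does not implement the paper's DSCD or GKM modules, and the temperature and mixing weight are assumptions.

```python
# Generic knowledge-distillation step for multi-modal-teacher to
# unimodal-student transfer; a plain KL-based sketch, not the paper's
# DSCD or GKM modules.
import torch
import torch.nn.functional as F

def distillation_step(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    """Blend the hard-label loss with a soft-label loss from the
    multi-modal teacher so the unimodal student inherits its knowledge."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(F.log_softmax(student_logits / temperature, dim=1),
                    F.softmax(teacher_logits / temperature, dim=1),
                    reduction="batchmean") * temperature ** 2
    return alpha * soft + (1 - alpha) * hard
```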