Multiple instance learning

  • Article type: Journal Article
    Multiple instance learning (MIL)-based methods have been widely adopted to process the whole slide image (WSI) in the field of computational pathology. Due to the sparse slide-level supervision, these methods usually lack good localization on the tumor regions, leading to poor interpretability. Moreover, they lack robust uncertainty estimation of prediction results, leading to poor reliability. To solve the above two limitations, we propose an explainable and evidential multiple instance learning (E2-MIL) framework for whole slide image classification. E2-MIL is mainly composed of three modules: a detail-aware attention distillation module (DAM), a structure-aware attention refined module (SRM), and an uncertainty-aware instance classifier (UIC). Specifically, DAM helps the global network locate more detail-aware positive instances by utilizing the complementary sub-bags to learn detailed attention knowledge from the local network. In addition, a masked self-guidance loss is also introduced to help bridge the gap between the slide-level labels and instance-level classification tasks. SRM generates a structure-aware attention map that locates the entire tumor region structure by effectively modeling the spatial relations between clustering instances. Moreover, UIC provides accurate instance-level classification results and robust predictive uncertainty estimation to improve the model reliability based on subjective logic theory. Extensive experiments on three large multi-center subtyping datasets demonstrate both slide-level and instance-level performance superiority of E2-MIL.
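Attention-based MIL pooling of the kind E2-MIL builds on can be sketched in a few lines. This is a generic illustration, not the paper's DAM/SRM/UIC architecture; the feature vectors and weights below are toy values:

```python
import math

def attention_mil_pool(instances, w_att, w_cls):
    """Score each instance, softmax the scores into attention weights,
    pool the bag as the attention-weighted sum, and classify the bag.

    instances: list of feature vectors; w_att / w_cls: toy linear weights.
    Returns (bag_logit, attention_weights).
    """
    scores = [sum(a * h for a, h in zip(w_att, inst)) for inst in instances]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    att = [e / z for e in exps]  # attention weights sum to 1 over the bag
    dim = len(instances[0])
    bag = [sum(att[k] * instances[k][d] for k in range(len(instances)))
           for d in range(dim)]
    logit = sum(w * b for w, b in zip(w_cls, bag))
    return logit, att

# Toy bag: two tumor-like instances and one background instance
bag = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
logit, att = attention_mil_pool(bag, w_att=[1.0, -1.0], w_cls=[1.0, 0.0])
```

The attention weights double as an interpretability signal: high-attention instances are the patches the slide-level decision relied on.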

  • Article type: Journal Article
    Objective. The aim of this work was to develop a novel artificial intelligence-assisted in vivo dosimetry method using time-resolved (TR) dose verification data to improve the quality of external beam radiotherapy. Approach. Although threshold classification methods are commonly used in error classification, they may lead to missed errors due to the loss of information resulting from the compression of multi-dimensional electronic portal imaging device (EPID) data into one or a few numbers. Recent research has investigated the classification of errors on time-integrated (TI) in vivo EPID images, with convolutional neural networks showing promise. However, it has been observed previously that TI approaches may cancel out the error presence on γ-maps during dynamic treatments. To address this limitation, simulated TR γ-maps for each volumetric modulated arc radiotherapy (VMAT) angle were used to detect treatment errors caused by complex patient geometries and beam arrangements. Typically, such images can be interpreted as a set of segments where only set class labels are provided. Inspired by recent weakly supervised approaches on histopathology images, we implemented a transformer-based multiple instance learning approach and utilized transfer learning from TI to TR γ-maps. Main results. The proposed algorithm performed well on classification of error type and error magnitude. The accuracy in the test set was up to 0.94 and 0.81 for 11 (error type) and 22 (error magnitude) classes of treatment errors, respectively. Significance. TR dose distributions can enhance treatment delivery decision-making; however, manual data analysis is nearly impossible due to the complexity and quantity of these data. Our proposed model efficiently handles data complexity, substantially improving treatment error classification compared to models that leverage TI data.
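The γ-maps referenced above come from the standard gamma evaluation, which combines a dose-difference and a distance-to-agreement criterion. A minimal 1-D sketch (a brute-force search, not a clinical implementation; grid spacing and tolerances are illustrative):

```python
import math

def gamma_1d(ref, eval_, spacing, dose_tol, dist_tol):
    """Minimal 1-D gamma index: for each reference point, search all
    evaluated points for the minimum combined dose/distance deviation.

    ref, eval_:  dose samples on a common grid with `spacing` (mm)
    dose_tol:    dose-difference criterion (same units as dose)
    dist_tol:    distance-to-agreement criterion (mm)
    Returns per-point gamma values; gamma <= 1 means 'pass'.
    """
    gammas = []
    for i, d_ref in enumerate(ref):
        best = float("inf")
        for j, d_ev in enumerate(eval_):
            dist = abs(i - j) * spacing
            dd = d_ev - d_ref
            g = math.sqrt((dist / dist_tol) ** 2 + (dd / dose_tol) ** 2)
            best = min(best, g)
        gammas.append(best)
    return gammas

# Identical profiles pass everywhere (gamma == 0 at every point)
g = gamma_1d([1.0, 2.0, 1.0], [1.0, 2.0, 1.0],
             spacing=1.0, dose_tol=0.06, dist_tol=3.0)
```

A map of these per-point values over all beam angles is exactly the kind of TR input the classifier above consumes.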

  • Article type: Journal Article
    Histopathology image-based survival prediction aims to provide a precise assessment of cancer prognosis and can inform personalized treatment decision-making in order to improve patient outcomes. However, existing methods cannot automatically model the complex correlations between numerous morphologically diverse patches in each whole slide image (WSI), thereby preventing them from achieving a more profound understanding and inference of the patient status. To address this, here we propose a novel deep learning framework, termed dual-stream multi-dependency graph neural network (DM-GNN), to enable precise cancer patient survival analysis. Specifically, DM-GNN is structured with the feature updating and global analysis branches to better model each WSI as two graphs based on morphological affinity and global co-activating dependencies. As these two dependencies depict each WSI from distinct but complementary perspectives, the two designed branches of DM-GNN can jointly achieve the multi-view modeling of complex correlations between the patches. Moreover, DM-GNN is also capable of boosting the utilization of dependency information during graph construction by introducing the affinity-guided attention recalibration module as the readout function. This novel module offers increased robustness against feature perturbation, thereby ensuring more reliable and stable predictions. Extensive benchmarking experiments on five TCGA datasets demonstrate that DM-GNN outperforms other state-of-the-art methods and offers interpretable prediction insights based on the morphological depiction of high-attention patches. Overall, DM-GNN represents a powerful auxiliary tool for personalized cancer prognosis from histopathology images and has great potential to assist clinicians in making personalized treatment decisions and improving patient outcomes.
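Graph construction from patch features, as in DM-GNN's morphological-affinity graph, can be illustrated with a k-nearest-neighbour adjacency built from cosine similarity. This is a generic sketch under the assumption that "affinity" means feature-space similarity; it is not the paper's exact construction:

```python
import math

def affinity_graph(features, k=2):
    """k-nearest-neighbour adjacency from cosine similarity between
    patch feature vectors (a stand-in for 'morphological affinity')."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)
    adj = {}
    for i, fi in enumerate(features):
        # Rank all other patches by similarity, keep the top k as neighbours
        sims = sorted(((cos(fi, fj), j) for j, fj in enumerate(features) if j != i),
                      reverse=True)
        adj[i] = [j for _, j in sims[:k]]
    return adj

# Patches 0/1 and 2/3 form two feature-space clusters
feats = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
adj = affinity_graph(feats, k=1)
```

A second graph over a different dependency (e.g. co-activation) would be built the same way with a different similarity, giving the two complementary views the dual-stream design exploits.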

  • DOI:
    Article type: Journal Article
    Whole Slide Images (WSI), obtained by high-resolution digital scanning of microscope slides at multiple scales, are the cornerstone of modern Digital Pathology. However, they represent a particular challenge to AI-based/AI-mediated analysis because pathology labeling is typically done at the slide level instead of the tile level. It is not just that medical diagnostics are recorded at the specimen level; the detection of oncogene mutations is also experimentally obtained, and recorded by initiatives like The Cancer Genome Atlas (TCGA), at the slide level. This configures a dual challenge: a) accurately predicting the overall cancer phenotype and b) finding out what cellular morphologies are associated with it at the tile level. To address these challenges, a weakly supervised Multiple Instance Learning (MIL) approach was explored for two prevalent cancer types, Invasive Breast Carcinoma (TCGA-BRCA) and Lung Squamous Cell Carcinoma (TCGA-LUSC). This approach was explored for tumor detection at low magnification levels and for TP53 mutations at various levels. Our results show that a novel additive implementation of MIL matched the performance of the reference implementation (AUC 0.96), and was only slightly outperformed by Attention MIL (AUC 0.97). More interestingly from the perspective of the molecular pathologist, these different AI architectures identify distinct sensitivities to morphological features (through the detection of Regions of Interest, RoI) at different amplification levels. Tellingly, TP53 mutation was most sensitive to features at the higher amplification levels where cellular morphology is resolved.

  • Article type: Journal Article
    BACKGROUND: The emergence of digital whole slide image (WSI) has driven the development of computational pathology. However, obtaining patch-level annotations is challenging and time-consuming due to the high resolution of WSI, which limits the applicability of fully supervised methods. We aim to address the challenges related to patch-level annotations.
    METHODS: We propose a universal framework for weakly supervised WSI analysis based on Multiple Instance Learning (MIL). To achieve effective aggregation of instance features, we design a feature aggregation module along multiple dimensions by considering feature distribution, instance correlation, and instance-level evaluation. First, we implement an instance-level standardization layer and a deep projection unit to improve the separation of instances in the feature space. Then, a self-attention mechanism is employed to explore dependencies between instances. Additionally, an instance-level pseudo-label evaluation method is introduced to enhance the available information during the weak supervision process. Finally, a bag-level classifier is used to obtain preliminary WSI classification results. To achieve even more accurate WSI label predictions, we have designed a key instance selection module that strengthens the learning of local features for instances. Combining the results from both modules leads to an improvement in WSI prediction accuracy.
    RESULTS: Experiments conducted on Camelyon16, TCGA-NSCLC, SICAPv2, PANDA and classical MIL benchmark datasets demonstrate that our proposed method achieves competitive performance compared to some recent methods, with a maximum improvement of 14.6% in terms of classification accuracy.
    CONCLUSIONS: Our method can improve the classification accuracy of whole slide images in a weakly supervised way, and more accurately detect lesion areas.
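The instance-level standardization step described in the methods can be illustrated as a per-feature z-score across the bag. This is a plain reading of "instance-level standardization layer", not the authors' exact layer:

```python
def standardize_instances(instances):
    """Z-score each feature across the instances of one bag.

    A constant feature (zero variance) is left at 0 by substituting std = 1.
    """
    n = len(instances)
    dim = len(instances[0])
    means = [sum(inst[d] for inst in instances) / n for d in range(dim)]
    stds = []
    for d in range(dim):
        var = sum((inst[d] - means[d]) ** 2 for inst in instances) / n
        stds.append(var ** 0.5 or 1.0)  # guard against zero variance
    return [[(inst[d] - means[d]) / stds[d] for d in range(dim)]
            for inst in instances]

# Two instances, two features; the second feature is constant across the bag
out = standardize_instances([[1.0, 5.0], [3.0, 5.0]])
```

Centering and scaling per bag spreads the instances out in feature space, which is the stated goal of the layer before self-attention is applied.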

  • Article type: Journal Article
    Graph convolutional neural networks have shown significant potential in natural and histopathology images. However, their use has only been studied in a single magnification or multi-magnification with either homogeneous graphs or only different node types. In order to leverage the multi-magnification information and improve message passing with graph convolutional networks, we handle different embedding spaces at each magnification by introducing the Multi-Scale Relational Graph Convolutional Network (MS-RGCN) as a multiple instance learning method. We model histopathology image patches and their relation with neighboring patches and patches at other scales (i.e., magnifications) as a graph. We define separate message-passing neural networks based on node and edge types to pass the information between different magnification embedding spaces. We experiment on prostate cancer histopathology images to predict the grade groups based on the extracted features from patches. We also compare our MS-RGCN with multiple state-of-the-art methods with evaluations on several source and held-out datasets. Our method outperforms the state-of-the-art on all of the datasets and image types consisting of tissue microarrays, whole-mount slide regions, and whole-slide images. Through an ablation study, we test and show the value of the pertinent design features of the MS-RGCN.
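Relation-specific message passing of the kind MS-RGCN uses (separate networks per edge type) can be sketched with scalar per-relation weights. The relation names and values below are illustrative, and real R-GCN layers use learned weight matrices rather than scalars:

```python
from collections import defaultdict

def rgcn_step(h, edges, w_rel):
    """One simplified relational message-passing step.

    h:      {node: scalar feature}
    edges:  list of (src, dst, relation) tuples
    w_rel:  {relation: scalar weight}
    h'[v] = h[v] + sum over relations r of w_rel[r] * mean of r-messages to v.
    """
    inbox = defaultdict(lambda: defaultdict(list))
    for u, v, r in edges:
        inbox[v][r].append(h[u])
    out = dict(h)
    for v, by_rel in inbox.items():
        for r, msgs in by_rel.items():
            out[v] += w_rel[r] * sum(msgs) / len(msgs)
    return out

# Node 0 receives one same-scale and one cross-scale message
h = {0: 1.0, 1: 2.0, 2: 4.0}
edges = [(1, 0, "same_scale"), (2, 0, "cross_scale")]
out = rgcn_step(h, edges, {"same_scale": 0.5, "cross_scale": 0.25})
```

Keeping a separate transform per relation is what lets information from different magnification embedding spaces mix without being forced through one shared projection.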

  • Article type: Journal Article
    CONCLUSIONS: We proposed a hierarchical framework including an unsupervised candidate image selection and a weakly supervised patch image detection based on multiple instance learning (MIL) to effectively estimate eosinophil quantities in tissue samples from whole slide images. MIL is an innovative approach that can help deal with the variability in cell distribution detection and enable automated eosinophil quantification from sinonasal histopathological images with a high degree of accuracy. The study lays the foundation for further research and development in the field of automated histopathological image analysis, and validation on more extensive and diverse datasets will contribute to real-world application.

  • Article type: Journal Article
    Colorectal cancer (CRC) is the third most common cancer in the United States. Tumor Budding (TB) detection and quantification are crucial yet labor-intensive steps in determining the CRC stage through the analysis of histopathology images. To help with this process, we adapt the Segment Anything Model (SAM) on the CRC histopathology images to segment TBs using SAM-Adapter. In this approach, we automatically take task-specific prompts from CRC images and train the SAM model in a parameter-efficient way. We compare the predictions of our model with the predictions from a trained-from-scratch model using the annotations from a pathologist. As a result, our model achieves an intersection over union (IoU) of 0.65 and an instance-level Dice score of 0.75, which are promising in matching the pathologist's TB annotation. We believe our study offers a novel solution to identify TBs on H&E-stained histopathology images. Our study also demonstrates the value of adapting the foundation model for pathology image segmentation tasks.
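The reported IoU and Dice metrics are set-overlap measures on binary masks and can be computed directly. A minimal sketch over masks given as coordinate sets (note that for a single pair of masks, Dice = 2·IoU/(1+IoU)):

```python
def iou_dice(pred, target):
    """IoU and Dice between binary masks represented as sets of (row, col)."""
    inter = len(pred & target)
    union = len(pred | target)
    iou = inter / union if union else 1.0
    dice = 2 * inter / (len(pred) + len(target)) if (pred or target) else 1.0
    return iou, dice

# Two 3-pixel masks overlapping in 2 pixels
pred = {(0, 0), (0, 1), (1, 0)}
target = {(0, 0), (0, 1), (1, 1)}
iou, dice = iou_dice(pred, target)
```

An instance-level Dice, as reported in the abstract, averages this score over matched predicted/annotated buds rather than pooling all pixels.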

  • Article type: Journal Article
    Whole Slide Image (WSI) analysis, driven by deep learning algorithms, has the potential to revolutionize tumor detection, classification, and treatment response prediction. However, challenges persist, such as limited model generalizability across various cancer types, the labor-intensive nature of patch-level annotation, and the necessity of integrating multi-magnification information to attain a comprehensive understanding of pathological patterns.
    In response to these challenges, we introduce MAMILNet, an innovative multi-scale attentional multi-instance learning framework for WSI analysis. The incorporation of attention mechanisms into MAMILNet contributes to its exceptional generalizability across diverse cancer types and prediction tasks. This model considers whole slides as "bags" and individual patches as "instances." By adopting this approach, MAMILNet effectively eliminates the requirement for intricate patch-level labeling, significantly reducing the manual workload for pathologists. To enhance prediction accuracy, the model employs a multi-scale "consultation" strategy, facilitating the aggregation of test outcomes from various magnifications.
    Our assessment of MAMILNet encompasses 1171 cases encompassing a wide range of cancer types, showcasing its effectiveness in predicting complex tasks. Remarkably, MAMILNet achieved impressive results in distinct domains: for breast cancer tumor detection, the Area Under the Curve (AUC) was 0.8872, with an Accuracy of 0.8760. In the realm of lung cancer typing diagnosis, it achieved an AUC of 0.9551 and an Accuracy of 0.9095. Furthermore, in predicting drug therapy responses for ovarian cancer, MAMILNet achieved an AUC of 0.7358 and an Accuracy of 0.7341.
    The outcomes of this study underscore the potential of MAMILNet in driving the advancement of precision medicine and individualized treatment planning within the field of oncology. By effectively addressing challenges related to model generalization, annotation workload, and multi-magnification integration, MAMILNet shows promise in enhancing healthcare outcomes for cancer patients. The framework's success in accurately detecting breast tumors, diagnosing lung cancer types, and predicting ovarian cancer therapy responses highlights its significant contribution to the field and paves the way for improved patient care.
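The multi-scale "consultation" strategy aggregates predictions across magnifications; the abstract does not specify the rule, so the sketch below assumes simple averaging of per-magnification class probabilities (the magnification keys are illustrative):

```python
def multiscale_consult(preds_by_mag):
    """Fuse slide-level class probabilities predicted at several
    magnifications by simple averaging -- one plausible aggregation rule,
    not necessarily the one MAMILNet implements."""
    preds = list(preds_by_mag.values())
    n_cls = len(preds[0])
    return [sum(p[c] for p in preds) / len(preds) for c in range(n_cls)]

# Three magnifications "consulting" on a two-class slide prediction
fused = multiscale_consult({"5x": [0.6, 0.4], "10x": [0.8, 0.2], "20x": [0.7, 0.3]})
```

Averaging valid probability vectors yields another valid probability vector, so the fused output can be thresholded or argmaxed like any single-scale prediction.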

  • Article type: Journal Article
    Artificial intelligence (AI) agents encounter the problem of catastrophic forgetting when they are trained sequentially with new data batches. This issue poses a barrier to the implementation of AI-based models in tasks that involve ongoing evolution, such as cancer prediction. Moreover, whole slide images (WSI) play a crucial role in cancer management, and their automated analysis has become increasingly popular in assisting pathologists during the diagnosis process. Incremental learning (IL) techniques aim to develop algorithms capable of retaining previously acquired information while also acquiring new insights to predict future data. Deep IL techniques need to address the challenges posed by the gigapixel scale of WSIs, which often necessitates the use of multiple instance learning (MIL) frameworks. In this paper, we introduce an IL algorithm tailored for analyzing WSIs within a MIL paradigm. The proposed Multiple Instance Class-Incremental Learning (MICIL) algorithm combines MIL with class-IL for the first time, allowing for the incremental prediction of multiple skin cancer subtypes from WSIs within a class-IL scenario. Our framework incorporates knowledge distillation and data rehearsal, along with a novel embedding-level distillation, aiming to preserve the latent space at the aggregated WSI level. Results demonstrate the algorithm's effectiveness in addressing the challenge of balancing IL-specific metrics, such as intransigence and forgetting, and solving the plasticity-stability dilemma.
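The knowledge distillation MICIL uses against forgetting is typically a cross-entropy between temperature-softened teacher and student distributions. A minimal sketch (Hinton-style logit distillation; the paper additionally distills at the embedding level, which is not shown here):

```python
import math

def softmax(logits, temp=1.0):
    """Temperature-softened softmax (higher temp -> softer distribution)."""
    m = max(logits)
    exps = [math.exp((x - m) / temp) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def distillation_loss(student_logits, teacher_logits, temp=2.0):
    """Cross-entropy between softened teacher and student outputs,
    scaled by temp**2 as is conventional in logit distillation."""
    p = softmax(teacher_logits, temp)
    q = softmax(student_logits, temp)
    return -temp * temp * sum(pi * math.log(qi) for pi, qi in zip(p, q))

# Matching outputs give the minimum (the teacher's entropy); drifting away
# from the old model's predictions raises the penalty
same = distillation_loss([2.0, 0.5], [2.0, 0.5])
drift = distillation_loss([0.5, 2.0], [2.0, 0.5])
```

Adding this penalty to the new-class loss is what lets the incremental model learn new skin cancer subtypes while staying close to its previous predictions on old ones.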
