Multiple instance learning

  • Article type: Journal Article
    Automated cervical cancer screening through computer-assisted diagnosis has shown considerable potential to improve screening accessibility and reduce associated costs and errors. However, classification performance on whole slide images (WSIs) remains suboptimal due to patient-specific variations. To improve the precision of the screening, pathologists not only analyze the characteristics of suspected abnormal cells, but also compare them with normal cells. Motivated by this practice, we propose a novel cervical cell comparative learning method that leverages pathologist knowledge to learn the differences between normal and suspected abnormal cells within the same WSI. Our method employs two pre-trained YOLOX models to detect suspected abnormal and normal cells in a given WSI. A self-supervised model then extracts features for the detected cells. Subsequently, a tailored Transformer encoder fuses the cell features to obtain WSI instance embeddings. Finally, attention-based multi-instance learning is applied to achieve classification. The experimental results show an AUC of 0.9319 for our proposed method. Moreover, the method achieved professional pathologist-level performance, indicating its potential for clinical applications.
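The attention-based MIL step in the abstract above can be sketched in a few lines: each detected cell (instance) is scored, the scores are softmaxed into attention weights, and the WSI (bag) embedding is the attention-weighted sum. This is a minimal pure-Python sketch in the spirit of standard attention MIL; the parameter vectors `w` and `v` are hypothetical stand-ins for learned weights (with a diagonal attention matrix for brevity), not the paper's actual model.

```python
import math

def attention_mil_pool(instances, w, v):
    """Attention-based MIL pooling: softmax per-instance scores into
    attention weights, then return the weighted bag embedding."""
    # Unnormalised score per instance: a_k = w . tanh(v * h_k)
    scores = [sum(wi * math.tanh(vi * hi) for wi, vi, hi in zip(w, v, h))
              for h in instances]
    # Numerically stable softmax over instances
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    attn = [e / total for e in exps]
    # Bag embedding: attention-weighted sum of instance features
    dim = len(instances[0])
    bag = [sum(a * h[d] for a, h in zip(attn, instances)) for d in range(dim)]
    return bag, attn

# Toy bag: three "cells" with 2-d features; the second looks most abnormal
bag, attn = attention_mil_pool([[0.1, 0.2], [2.0, 1.5], [0.0, 0.1]],
                               w=[1.0, 1.0], v=[1.0, 1.0])
```

A slide-level classifier then operates on `bag`, while `attn` indicates which cells drove the prediction.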

  • Article type: Journal Article
    Objective: The diagnosis of chronic thromboembolic pulmonary hypertension (CTEPH) is challenging due to nonspecific early symptoms, complex diagnostic processes, and small lesion sizes. This study aims to develop an automatic diagnosis method for CTEPH using non-contrast computed tomography (NCCT) scans, enabling automated diagnosis without precise lesion annotation.
    Approach: A novel Cascade Network with Multiple Instance Learning (CNMIL) framework was developed to improve the diagnosis of CTEPH. This method uses a cascade network architecture combining two ResNet-18 CNN networks to progressively distinguish between normal and CTEPH cases. Multiple Instance Learning (MIL) is employed to treat each 3D CT case as a "bag" of image slices, using attention scoring to identify the most important slices. An attention module helps the model focus on diagnostically relevant regions within each slice. The dataset comprised NCCT scans from 300 subjects (117 males and 183 females, average age 52.5 ± 20.9 years), consisting of 132 normal cases and 168 cases of lung disease, including 88 instances of CTEPH. The CNMIL framework was evaluated using sensitivity, specificity, and area under the curve (AUC) metrics, and compared with common 3D supervised classification networks and existing CTEPH automatic diagnosis networks.
    Main results: The CNMIL framework demonstrated high diagnostic performance, achieving an AUC of 0.807, accuracy of 0.833, sensitivity of 0.795, and specificity of 0.849 in distinguishing CTEPH cases. Ablation studies revealed that integrating MIL and the cascade network significantly enhanced performance, with the model achieving an AUC of 0.993 and perfect sensitivity (1.000) in normal classification. Comparisons with other 3D network architectures confirmed that the integrated model outperformed the others, achieving the highest AUC of 0.8419.
    Significance: The CNMIL network requires no additional scans or annotations, relying solely on NCCT. This approach can improve timely and accurate CTEPH detection, resulting in better patient outcomes.
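The cascade logic described above (first screen out normals, then separate CTEPH among the abnormal cases) can be sketched as follows. The two scorer callables stand in for the paper's two ResNet-18 branches, and the thresholds are hypothetical defaults, not values from the study.

```python
def cascade_diagnose(case, stage1_model, stage2_model, t1=0.5, t2=0.5):
    """Two-stage cascade: stage 1 filters normal cases; only cases flagged
    abnormal reach stage 2, which separates CTEPH from other lung disease."""
    if stage1_model(case) < t1:      # P(abnormal) below threshold -> stop early
        return "normal"
    if stage2_model(case) >= t2:     # P(CTEPH) among abnormal cases
        return "CTEPH"
    return "other lung disease"

# Dummy scorers standing in for the two trained CNN branches
p_abnormal = lambda case: case["abnormal_score"]
p_cteph = lambda case: case["cteph_score"]
```

The benefit of the cascade is that the easy normal-vs-abnormal decision is made first, so the second, harder classifier only sees the abnormal subset.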

  • Article type: Journal Article
    Mild cognitive impairment (MCI) is a prodromal stage of Alzheimer's disease (AD) that causes a significant burden in caregiving and medical costs. Clinically, the diagnosis of MCI is determined by the impairment statuses of five cognitive domains. If one of these cognitive domains is impaired, the patient is diagnosed with MCI, and if two out of the five domains are impaired, the patient is diagnosed with AD. In medical records, most of the time, the diagnosis of MCI/AD is given, but not the statuses of the five domains. We may treat the domain statuses as missing variables. This diagnostic procedure relates MCI/AD status modeling to multiple-instance learning, where each domain resembles an instance. However, traditional multiple-instance learning assumes common predictors among instances, but in our case, each domain is associated with different predictors. In this article, we generalized the multiple-instance logistic regression to accommodate the heterogeneity in predictors among different instances. The proposed model is dubbed heterogeneous-instance logistic regression and is estimated via the expectation-maximization algorithm because of the presence of the missing variables. We also derived two variants of the proposed model for the MCI and AD diagnoses. The proposed model is validated in terms of its estimation accuracy, latent status prediction, and robustness via extensive simulation studies. Finally, we analyzed the National Alzheimer's Coordinating Center-Uniform Data Set using the proposed model and demonstrated its potential.
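The bag-level likelihood in this setting follows the classic multiple-instance logistic regression combination: "MCI" means at least one domain is impaired, so the bag probability is a noisy-or over per-domain logistic models. The sketch below illustrates the heterogeneous twist — each domain has its own predictor set and coefficient vector. Predictors and coefficients here are hypothetical toys, and the paper's actual fitting uses EM over the unobserved domain statuses rather than this forward computation alone.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def p_any_domain_impaired(domain_predictors, domain_betas):
    """Heterogeneous-instance logistic regression forward pass: each domain
    (instance) has its own predictors/coefficients; the bag event 'at least
    one domain impaired' (the MCI rule) is a noisy-or over instances."""
    p_none = 1.0
    for x, beta in zip(domain_predictors, domain_betas):
        # Per-domain impairment probability from that domain's own model
        p_k = sigmoid(sum(b * xi for b, xi in zip(beta, x)))
        p_none *= (1.0 - p_k)
    return 1.0 - p_none

# Two domains with different (heterogeneous) predictor sets and dimensions
p = p_any_domain_impaired(
    domain_predictors=[[1.0, 0.5], [2.0]],   # e.g. memory tests vs a language test
    domain_betas=[[0.8, -0.3], [1.1]],
)
```

The AD variant ("at least two of five impaired") replaces the noisy-or with the probability that two or more Bernoulli indicators fire.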

  • Article type: Journal Article
    The T cell receptor (TCR) repertoire is pivotal to the human immune system, and understanding its nuances can significantly enhance our ability to forecast cancer-related immune responses. However, existing methods often overlook the intra- and inter-sequence interactions of T cell receptors (TCRs), limiting the development of sequence-based cancer-related immune status predictions. To address this challenge, we propose BertTCR, an innovative deep learning framework designed to predict cancer-related immune status using TCRs. BertTCR combines a pre-trained protein large language model with deep learning architectures, enabling it to extract deeper contextual information from TCRs. Compared to three state-of-the-art sequence-based methods, BertTCR improves the AUC on an external validation set for thyroid cancer detection by 21 percentage points. Additionally, this model was trained on over 2000 publicly available TCR libraries covering 17 types of cancer and healthy samples, and it has been validated on multiple public external datasets for its ability to distinguish cancer patients from healthy individuals. Furthermore, BertTCR can accurately classify various cancer types and healthy individuals. Overall, BertTCR advances TCR-based forecasting of cancer-related immune status, offering promising potential for a wide range of immune status prediction tasks.

  • Article type: Journal Article
    Multiple instance learning (MIL)-based methods have been widely adopted to process the whole slide image (WSI) in the field of computational pathology. Due to the sparse slide-level supervision, these methods usually lack good localization on the tumor regions, leading to poor interpretability. Moreover, they lack robust uncertainty estimation of prediction results, leading to poor reliability. To solve the above two limitations, we propose an explainable and evidential multiple instance learning (E2-MIL) framework for whole slide image classification. E2-MIL is mainly composed of three modules: a detail-aware attention distillation module (DAM), a structure-aware attention refined module (SRM), and an uncertainty-aware instance classifier (UIC). Specifically, DAM helps the global network locate more detail-aware positive instances by utilizing the complementary sub-bags to learn detailed attention knowledge from the local network. In addition, a masked self-guidance loss is also introduced to help bridge the gap between the slide-level labels and instance-level classification tasks. SRM generates a structure-aware attention map that locates the entire tumor region structure by effectively modeling the spatial relations between clustering instances. Moreover, UIC provides accurate instance-level classification results and robust predictive uncertainty estimation to improve the model reliability based on subjective logic theory. Extensive experiments on three large multi-center subtyping datasets demonstrate both slide-level and instance-level performance superiority of E2-MIL.
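The uncertainty estimation in UIC rests on subjective logic, where a classifier outputs non-negative per-class evidence rather than softmax probabilities. The sketch below shows the standard subjective-logic readout used in evidential classifiers — Dirichlet parameters alpha_k = e_k + 1, belief_k = e_k / S, uncertainty = K / S — as a generic illustration, not E2-MIL's exact instance-classifier head.

```python
def dirichlet_belief_and_uncertainty(evidence):
    """Subjective-logic readout: per-class evidence e_k >= 0 gives Dirichlet
    alpha_k = e_k + 1; beliefs and uncertainty sum to 1, and low total
    evidence yields high uncertainty."""
    K = len(evidence)
    S = sum(e + 1.0 for e in evidence)          # Dirichlet strength
    beliefs = [e / S for e in evidence]
    uncertainty = K / S
    return beliefs, uncertainty

confident = dirichlet_belief_and_uncertainty([18.0, 0.0])  # strong class-0 evidence
vacuous = dirichlet_belief_and_uncertainty([0.0, 0.0])     # no evidence at all
```

A prediction backed by little evidence (e.g. an ambiguous patch) is thus flagged as unreliable instead of being forced into an overconfident class probability.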

  • Article type: Journal Article
    Objective: The aim of this work was to develop a novel artificial intelligence-assisted in vivo dosimetry method using time-resolved (TR) dose verification data to improve the quality of external beam radiotherapy.
    Approach: Although threshold classification methods are commonly used in error classification, they may miss errors due to the loss of information resulting from the compression of multi-dimensional electronic portal imaging device (EPID) data into one or a few numbers. Recent research has investigated the classification of errors on time-integrated (TI) in vivo EPID images, with convolutional neural networks showing promise. However, it has been observed previously that TI approaches may cancel out the error presence on γ-maps during dynamic treatments. To address this limitation, simulated TR γ-maps for each volumetric modulated arc radiotherapy angle were used to detect treatment errors caused by complex patient geometries and beam arrangements. Typically, such images can be interpreted as a set of segments where only set class labels are provided. Inspired by recent weakly supervised approaches on histopathology images, we implemented a transformer-based multiple instance learning approach and utilized transfer learning from TI to TR γ-maps.
    Main results: The proposed algorithm performed well on classification of error type and error magnitude. The accuracy in the test set was up to 0.94 and 0.81 for 11 (error type) and 22 (error magnitude) classes of treatment errors, respectively.
    Significance: TR dose distributions can enhance treatment delivery decision-making; however, manual data analysis is nearly impossible due to the complexity and quantity of this data. Our proposed model efficiently handles data complexity, substantially improving treatment error classification compared to models that leverage TI data.

  • Article type: Journal Article
    Histopathology image-based survival prediction aims to provide a precise assessment of cancer prognosis and can inform personalized treatment decision-making in order to improve patient outcomes. However, existing methods cannot automatically model the complex correlations between numerous morphologically diverse patches in each whole slide image (WSI), thereby preventing them from achieving a more profound understanding and inference of the patient status. To address this, here we propose a novel deep learning framework, termed dual-stream multi-dependency graph neural network (DM-GNN), to enable precise cancer patient survival analysis. Specifically, DM-GNN is structured with the feature updating and global analysis branches to better model each WSI as two graphs based on morphological affinity and global co-activating dependencies. As these two dependencies depict each WSI from distinct but complementary perspectives, the two designed branches of DM-GNN can jointly achieve the multi-view modeling of complex correlations between the patches. Moreover, DM-GNN is also capable of boosting the utilization of dependency information during graph construction by introducing the affinity-guided attention recalibration module as the readout function. This novel module offers increased robustness against feature perturbation, thereby ensuring more reliable and stable predictions. Extensive benchmarking experiments on five TCGA datasets demonstrate that DM-GNN outperforms other state-of-the-art methods and offers interpretable prediction insights based on the morphological depiction of high-attention patches. Overall, DM-GNN represents a powerful auxiliary tool for personalized cancer prognosis from histopathology images and has great potential to assist clinicians in making personalized treatment decisions and improving patient outcomes.
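Building a graph over WSI patches from morphological affinity — one of the two graph views described above — amounts to connecting each patch to its most similar patches in feature space. A minimal sketch, assuming cosine similarity and k-nearest-neighbour edges over hypothetical toy feature vectors (DM-GNN's actual construction and its co-activation graph differ in detail):

```python
import math

def cosine(a, b):
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def affinity_edges(patch_feats, k=1):
    """Connect each patch to its k most similar patches (cosine similarity),
    yielding the undirected edge set of a morphological-affinity graph."""
    edges = set()
    for i, fi in enumerate(patch_feats):
        sims = sorted(((cosine(fi, fj), j)
                       for j, fj in enumerate(patch_feats) if j != i),
                      reverse=True)
        for _, j in sims[:k]:
            edges.add((min(i, j), max(i, j)))  # store undirected edges once
    return edges

# Patches 0 and 1 are morphologically similar; patch 2 is distinct
edges = affinity_edges([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]], k=1)
```

A graph neural network then propagates information along these edges so that morphologically related patches inform each other's representations.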

  • Article type: Journal Article
    Whole Slide Images (WSIs), obtained by high-resolution digital scanning of microscope slides at multiple scales, are the cornerstone of modern Digital Pathology. However, they present a particular challenge to AI-based/AI-mediated analysis because pathology labeling is typically done at the slide level rather than the tile level. Not only are medical diagnoses recorded at the specimen level; oncogene mutation status is also obtained experimentally and recorded, by initiatives such as The Cancer Genome Atlas (TCGA), at the slide level. This poses a dual challenge: a) accurately predicting the overall cancer phenotype and b) finding out which cellular morphologies are associated with it at the tile level. To address these challenges, a weakly supervised Multiple Instance Learning (MIL) approach was explored for two prevalent cancer types, Invasive Breast Carcinoma (TCGA-BRCA) and Lung Squamous Cell Carcinoma (TCGA-LUSC). This approach was explored for tumor detection at low magnification levels and for TP53 mutations at various levels. Our results show that a novel additive implementation of MIL matched the performance of the reference implementation (AUC 0.96) and was only slightly outperformed by Attention MIL (AUC 0.97). More interestingly from the perspective of the molecular pathologist, these different AI architectures exhibit distinct sensitivities to morphological features (through the detection of Regions of Interest, RoIs) at different magnification levels. Tellingly, TP53 mutation was most sensitive to features at higher magnifications, where cellular morphology is resolved.
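The appeal of an additive MIL formulation is that the slide-level prediction decomposes over tiles, so per-tile attribution (for RoI maps) is read directly from each tile's contribution. A minimal sketch with a hypothetical shared linear scorer; the paper's implementation uses learned deep feature extractors rather than this toy:

```python
def additive_mil(tile_features, w, b=0.0):
    """Additive MIL: a shared scorer assigns each tile a contribution, and
    the slide-level logit is their mean, so the prediction decomposes
    tile-by-tile for direct attribution."""
    contribs = [sum(wi * f for wi, f in zip(w, feats)) + b
                for feats in tile_features]
    slide_logit = sum(contribs) / len(contribs)
    return slide_logit, contribs

# Two tiles with 2-d features; contribs double as an attribution map
logit, contribs = additive_mil([[1.0, 0.0], [0.0, 1.0]], w=[2.0, -1.0])
```

By contrast, attention MIL mixes tiles through learned weights before classification, so attribution there is indirect (via attention scores) rather than exact.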

  • Article type: Journal Article
    BACKGROUND: The emergence of digital whole slide image (WSI) has driven the development of computational pathology. However, obtaining patch-level annotations is challenging and time-consuming due to the high resolution of WSI, which limits the applicability of fully supervised methods. We aim to address the challenges related to patch-level annotations.
    METHODS: We propose a universal framework for weakly supervised WSI analysis based on Multiple Instance Learning (MIL). To achieve effective aggregation of instance features, we design a feature aggregation module from multiple dimensions by considering feature distribution, instances correlation and instance-level evaluation. First, we implement instance-level standardization layer and deep projection unit to improve the separation of instances in the feature space. Then, a self-attention mechanism is employed to explore dependencies between instances. Additionally, an instance-level pseudo-label evaluation method is introduced to enhance the available information during the weak supervision process. Finally, a bag-level classifier is used to obtain preliminary WSI classification results. To achieve even more accurate WSI label predictions, we have designed a key instance selection module that strengthens the learning of local features for instances. Combining the results from both modules leads to an improvement in WSI prediction accuracy.
    RESULTS: Experiments conducted on Camelyon16, TCGA-NSCLC, SICAPv2, PANDA and classical MIL benchmark datasets demonstrate that our proposed method achieves competitive performance compared with recent methods, with a maximum improvement of 14.6% in classification accuracy.
    CONCLUSIONS: Our method can improve the classification accuracy of whole slide images in a weakly supervised way, and more accurately detect lesion areas.
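The instance-level standardization layer mentioned in the methods can be sketched as a per-bag z-score: each feature dimension is normalized across the instances of one bag before aggregation, improving the separation of instances in feature space. A minimal sketch (the paper's layer sits inside a deep network alongside the projection unit; this toy shows the normalization alone, leaving zero-variance dimensions centred but unscaled):

```python
import math

def instance_standardize(bag):
    """Z-score each feature dimension across the instances of one bag."""
    n, dim = len(bag), len(bag[0])
    means = [sum(inst[d] for inst in bag) / n for d in range(dim)]
    stds = [math.sqrt(sum((inst[d] - means[d]) ** 2 for inst in bag) / n)
            for d in range(dim)]
    # Guard zero-variance dimensions: centre them without scaling
    return [[(inst[d] - means[d]) / (stds[d] if stds[d] > 0 else 1.0)
             for d in range(dim)] for inst in bag]

# Two instances; the second feature is constant across the bag
z = instance_standardize([[0.0, 5.0], [2.0, 5.0]])
```

Normalizing within the bag (rather than globally) makes the aggregation robust to slide-to-slide staining and scanner shifts.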

  • Article type: Journal Article
    Graph convolutional neural networks have shown significant potential on natural and histopathology images. However, their use has been studied only at a single magnification, or across multiple magnifications with either homogeneous graphs or only differing node types. In order to leverage the multi-magnification information and improve message passing with graph convolutional networks, we handle different embedding spaces at each magnification by introducing the Multi-Scale Relational Graph Convolutional Network (MS-RGCN) as a multiple instance learning method. We model histopathology image patches and their relation with neighboring patches and patches at other scales (i.e., magnifications) as a graph. We define separate message-passing neural networks based on node and edge types to pass the information between different magnification embedding spaces. We experiment on prostate cancer histopathology images to predict the grade groups based on the extracted features from patches. We also compare our MS-RGCN with multiple state-of-the-art methods with evaluations on several source and held-out datasets. Our method outperforms the state-of-the-art on all of the datasets and image types, consisting of tissue microarrays, whole-mount slide regions, and whole-slide images. Through an ablation study, we test and show the value of the pertinent design features of the MS-RGCN.
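The key idea — separate message-passing functions per edge type — can be sketched in a few lines. In this toy, each edge type (e.g. a same-scale neighbour versus a cross-magnification link) transforms messages with its own weight, here a scalar per type for brevity, and a node's update is the mean of the transformed neighbour features. Edge types, weights, and features below are hypothetical toys, not MS-RGCN's learned parameters.

```python
def rgcn_update(node_feats, edges_by_type, weights_by_type):
    """Relational message passing: transform each incoming message with its
    edge-type-specific weight, then average messages into the target node.
    Nodes with no incoming edges keep their features."""
    dim = len(node_feats[0])
    updated = []
    for i in range(len(node_feats)):
        msgs = []
        for etype, edges in edges_by_type.items():
            w = weights_by_type[etype]
            for src, dst in edges:
                if dst == i:
                    msgs.append([w * x for x in node_feats[src]])
        if not msgs:
            updated.append(list(node_feats[i]))
            continue
        updated.append([sum(m[d] for m in msgs) / len(msgs) for d in range(dim)])
    return updated

# Node 1 receives a same-scale message from node 0 and a cross-scale one from node 2
new_feats = rgcn_update(
    node_feats=[[1.0], [2.0], [4.0]],
    edges_by_type={"same_scale": [(0, 1)], "cross_scale": [(2, 1)]},
    weights_by_type={"same_scale": 1.0, "cross_scale": 0.5},
)
```

In the full model the scalar weights become per-type learned weight matrices, which is what lets each magnification keep its own embedding space.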
