Computational Pathology

  • Article type: Journal Article
    In digital pathology, accurate mitosis detection in histopathological images is critical for cancer diagnosis and prognosis. However, this remains challenging due to the inherent variability in cell morphology and the domain shift problem. This study introduces CNMI-YOLO (ConvNext Mitosis Identification-YOLO), a new two-stage deep learning method that uses the YOLOv7 architecture for cell detection and the ConvNeXt architecture for cell classification. The goal is to improve the identification of mitosis in different types of cancer. We utilized the MIDOG 2022 dataset in the experiments to ensure the model's robustness and success across various scanners, species, and cancer types. The CNMI-YOLO model demonstrates superior performance in accurately detecting mitotic cells, significantly outperforming existing models in terms of precision, recall, and F1-score. The CNMI-YOLO model achieved an F1-score of 0.795 on the MIDOG 2022 dataset and demonstrated robust generalization with F1-scores of 0.783 and 0.759 on the external melanoma and sarcoma test sets, respectively. Additionally, the study included ablation studies to evaluate various object detection and classification models, such as Faster R-CNN and Swin Transformer. Furthermore, we assessed the model's robustness on unseen data, confirming its ability to generalize and its potential for real-world use in digital pathology, using soft tissue sarcoma and melanoma samples not included in the training dataset.
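    A minimal sketch of the two-stage design described above (detector proposals followed by ConvNeXt classification) is given below. This is an illustration under assumptions, not the authors' released code: `detect_candidates` is a hypothetical wrapper around a first-stage YOLOv7-style detector, the classifier is loaded via the timm library, and the threshold is a placeholder.

```python
# Two-stage mitosis pipeline sketch: a detector proposes candidate cells, then a
# ConvNeXt classifier accepts or rejects each crop as mitotic.
import torch
import timm
from PIL import Image
from torchvision import transforms

classifier = timm.create_model("convnext_tiny", pretrained=True, num_classes=2)
classifier.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def classify_candidates(image: Image.Image, boxes, threshold: float = 0.5):
    """Second stage: crop each candidate box and keep those scored as mitotic."""
    mitoses = []
    for (x0, y0, x1, y1) in boxes:
        crop = preprocess(image.crop((x0, y0, x1, y1))).unsqueeze(0)
        with torch.no_grad():
            prob = torch.softmax(classifier(crop), dim=1)[0, 1].item()
        if prob >= threshold:
            mitoses.append(((x0, y0, x1, y1), prob))
    return mitoses

# Usage (boxes would come from the first-stage detector):
# boxes = detect_candidates(image)          # hypothetical YOLOv7 wrapper
# detections = classify_candidates(image, boxes)
```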

  • Article type: Journal Article
    The development of reliable artificial intelligence (AI) algorithms in pathology often depends on ground truth provided by annotation of whole slide images (WSIs), a time-consuming and operator-dependent process. A comparative analysis of different annotation approaches was performed to streamline this process. Two pathologists annotated renal tissue using a semi-automated tool (Segment Anything Model, SAM) and manual devices (touchpad vs. mouse). A comparison was conducted in terms of working time, reproducibility (overlap fraction), and precision (0 to 10 accuracy rated by two expert nephropathologists) among the different methods and operators. The impact of different displays on mouse performance was evaluated. Annotations focused on three tissue compartments: tubules (57 annotations), glomeruli (53 annotations), and arteries (58 annotations). The semi-automatic approach was the fastest and had the least inter-observer variability, averaging 13.6 ± 0.2 min with a difference (Δ) of 2%, followed by the mouse (29.9 ± 10.2 min, Δ = 24%) and the touchpad (47.5 ± 19.6 min, Δ = 45%). The highest reproducibility in tubules and glomeruli was achieved with SAM (overlap values of 1 and 0.99, compared to 0.97 for the mouse and 0.94 and 0.93 for the touchpad), though SAM had lower reproducibility in arteries (overlap value of 0.89 compared to 0.94 for both the mouse and touchpad). No precision differences were observed between operators (p = 0.59). Using non-medical monitors increased annotation times by 6.1%. The future employment of semi-automated and AI-assisted approaches can significantly speed up the annotation process, improving the ground truth for AI tool development.
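    A point-prompted annotation workflow of the kind compared here can be driven through Meta's segment-anything package; the sketch below is illustrative only (checkpoint path, tile filename, and click coordinates are placeholders), not the exact setup used in the study.

```python
# Sketch of point-prompted annotation with the Segment Anything Model (SAM).
# The annotator clicks once inside a structure (e.g. a glomerulus) and SAM
# returns candidate masks for review and manual refinement.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # assumed local checkpoint
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("renal_tile.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One foreground click (label 1) roughly at the centre of the structure.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[512, 384]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
best_mask = masks[int(np.argmax(scores))]  # boolean mask, then edited/approved by the pathologist
```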

  • Article type: Journal Article
    The incorporation of novel therapeutic agents such as antibody-drug conjugates, radio-conjugates, T-cell engagers, and chimeric antigen receptor cell therapies represents a paradigm shift in oncology. Cell-surface target quantification, quantitative assessment of receptor internalization, and changes in the tumor microenvironment (TME) are essential variables in the development of biomarkers for patient selection and therapeutic response. Assessing these parameters requires capabilities that transcend those of traditional biomarker approaches based on immunohistochemistry, in situ hybridization and/or sequencing assays. Computational pathology is emerging as a transformative solution in this new therapeutic landscape, enabling detailed assessment of not only target presence, expression levels, and intra-tumor distribution but also of additional phenotypic features of tumor cells and their surrounding TME. Here, we delineate the pivotal role of computational pathology in enhancing the efficacy and specificity of these advanced therapeutics, underscoring the integration of novel artificial intelligence models that promise to revolutionize biomarker discovery and drug development.

  • Article type: Journal Article
    Gastric cancer is the fifth most common and fourth deadliest cancer worldwide, with a bleak 5-year survival rate of about 20%. Despite significant research into its pathobiology, prognostic predictability remains insufficient due to pathologists' heavy workloads and the potential for diagnostic errors. Consequently, there is a pressing need for automated and precise histopathological diagnostic tools. This study leverages machine learning and deep learning techniques to classify histopathological images into healthy and cancerous categories. By utilizing both handcrafted and deep features and shallow learning classifiers on the GasHisSDB dataset, we conduct a comparative analysis to identify the most effective combinations of features and classifiers for differentiating normal from abnormal histopathological images without employing fine-tuning strategies. Our methodology achieves an accuracy of 95% with the SVM classifier, underscoring the effectiveness of feature fusion strategies. Additionally, cross-magnification experiments produced promising results with accuracies close to 80% and 90% when testing the models on unseen testing images with different resolutions.
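    The feature-fusion idea (handcrafted descriptors concatenated with frozen deep features, fed to a shallow classifier) can be sketched as follows. The backbone, histogram bins, and SVM kernel are illustrative assumptions rather than the configuration reported in the study.

```python
# Sketch of feature fusion: concatenate handcrafted colour-histogram features with
# deep features from a frozen ResNet backbone, then train a shallow SVM classifier.
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # expose the 512-d embedding instead of class logits
backbone.eval()

to_tensor = transforms.Compose([transforms.ToTensor(),
                                transforms.Resize((224, 224), antialias=True)])

def fused_features(pil_image):
    """Handcrafted (RGB histogram) + deep (ResNet-18) features for one patch."""
    arr = np.asarray(pil_image)
    hist = np.concatenate([np.histogram(arr[..., c], bins=32, range=(0, 255))[0]
                           for c in range(3)]).astype(np.float32)
    with torch.no_grad():
        deep = backbone(to_tensor(pil_image).unsqueeze(0)).squeeze(0).numpy()
    return np.concatenate([hist, deep])

# X = np.stack([fused_features(img) for img in patches]); y = labels (0 normal, 1 cancer)
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
# clf.fit(X_train, y_train); accuracy = clf.score(X_test, y_test)
```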

  • Article type: Journal Article
    Advancements in imaging technologies have revolutionized our ability to deeply profile pathological tissue architectures, generating large volumes of imaging data with unparalleled spatial resolution. This type of data collection, namely, spatial proteomics, offers invaluable insights into various human diseases. Simultaneously, computational algorithms have evolved to manage the increasing dimensionality of spatial proteomics inherent in this progress. Numerous imaging-based computational frameworks, such as computational pathology, have been proposed for research and clinical applications. However, the development of these fields demands diverse domain expertise, creating barriers to their integration and further application. This review seeks to bridge this divide by presenting a comprehensive guideline. We consolidate prevailing computational methods and outline a roadmap from image processing to data-driven, statistics-informed biomarker discovery. Additionally, we explore future perspectives as the field moves toward interfacing with other quantitative domains, holding significant promise for precision care in immuno-oncology.

  • Article type: Journal Article
    The increasing availability of biomedical data creates valuable resources for developing new deep learning algorithms to support experts, especially in domains where collecting large volumes of annotated data is not trivial. Biomedical data include several modalities containing complementary information, such as medical images and reports: images are often large and encode low-level information, while reports include a summarized, high-level description of the findings identified within the data, often concerning only a small part of the image. However, only a few methods can effectively link the visual content of images with the textual content of reports, preventing medical specialists from properly benefiting from the recent opportunities offered by deep learning models. This paper introduces a multimodal architecture that creates a robust biomedical data representation by encoding fine-grained text representations within image embeddings. The architecture aims to tackle data scarcity (combining supervised and self-supervised learning) and to create multimodal biomedical ontologies. The architecture is trained on over 6,000 colon whole slide images (WSIs), paired with the corresponding reports, collected from two digital pathology workflows. The evaluation of the multimodal architecture involves three tasks: WSI classification (on data from the pathology workflows and from public repositories), multimodal data retrieval, and linking between textual and visual concepts. Notably, the latter two tasks are available by architectural design without further training, showing that the multimodal architecture can be adopted as a backbone to solve specific tasks. The multimodal data representation outperforms the unimodal one on the classification of colon WSIs and halves the amount of data needed to reach accurate performance, reducing the required computational power and thus the carbon footprint. Combining images and reports with self-supervised algorithms makes it possible to mine databases and extract new information without requiring new expert annotations. In particular, the multimodal visual ontology, linking semantic concepts to images, may pave the way to advancements in medicine and biomedical analysis domains, not limited to histopathology.
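    The paper's exact architecture is not reproduced here, but the core idea of aligning report text with image embeddings can be illustrated with a generic contrastive (CLIP-style) objective. The encoders, embedding dimensions, and temperature below are assumptions for illustration only.

```python
# Generic sketch of contrastive alignment between WSI-level image embeddings and
# report-text embeddings. This is not the paper's architecture, only the common
# symmetric InfoNCE formulation used for such image-text alignment.
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb: torch.Tensor, text_emb: torch.Tensor, temperature: float = 0.07):
    """Symmetric InfoNCE loss over a batch of paired (WSI, report) embeddings."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature        # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets) +             # image -> matching report
            F.cross_entropy(logits.t(), targets)) / 2      # report -> matching image

# image_emb = wsi_encoder(patch_bags)     # hypothetical aggregated WSI encoder
# text_emb  = text_encoder(report_tokens) # hypothetical report encoder
# loss = contrastive_loss(image_emb, text_emb)
```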

  • Article type: Journal Article
    BACKGROUND: Postoperative liver metastasis significantly impacts the prognosis of pancreatic neuroendocrine tumor (panNET) patients after R0 resection. Combining computational pathology and deep learning radiomics can enhance the detection of postoperative liver metastasis in panNET patients.
    METHODS: Clinical data, pathology slides, and radiographic images were collected from 163 panNET patients post-R0 resection at Fudan University Shanghai Cancer Center (FUSCC) and FUSCC Pathology Consultation Center. Digital image analysis and deep learning identified liver metastasis-related features in Ki67-stained whole slide images (WSIs) and enhanced CT scans to create a nomogram. The model's performance was validated in both internal and external test cohorts.
    RESULTS: Multivariate logistic regression identified nerve infiltration as an independent risk factor for liver metastasis (p < 0.05). The Pathomics score, which was based on a hotspot and the heterogeneous distribution of Ki67 staining, showed improved predictive accuracy for liver metastasis (AUC = 0.799). The deep learning-radiomics (DLR) score achieved an AUC of 0.875. The integrated nomogram, which combines clinical, pathological, and imaging features, demonstrated outstanding performance, with an AUC of 0.985 in the training cohort and 0.961 in the validation cohort. The high-risk group had a median recurrence-free survival of 28.5 months compared to 34.7 months for the low-risk group, showing a significant correlation with prognosis (p < 0.05).
    CONCLUSIONS: A new predictive model that integrates computational pathologic scores and deep learning-radiomics can better predict postoperative liver metastasis in panNET patients, aiding clinicians in developing personalized treatments.
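    The way such a nomogram combines a pathomics score, a DLR score, and clinical covariates can be sketched as a multivariable logistic regression whose coefficients map onto nomogram axes. The column names and data frames below are illustrative assumptions, not the study's code.

```python
# Sketch of an integrated risk model: combine the Pathomics score, the deep
# learning-radiomics (DLR) score and clinical covariates in a logistic regression
# and evaluate with AUC. Feature names and data frames are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

FEATURES = ["pathomics_score", "dlr_score", "nerve_infiltration", "tumor_grade"]

def fit_risk_model(train_df: pd.DataFrame, test_df: pd.DataFrame):
    """Fit a multivariable logistic model and return the fitted model and test AUC."""
    model = LogisticRegression(max_iter=1000)
    model.fit(train_df[FEATURES], train_df["liver_metastasis"])
    auc = roc_auc_score(test_df["liver_metastasis"],
                        model.predict_proba(test_df[FEATURES])[:, 1])
    return model, auc

# train_df / test_df: one row per patient with the FEATURES columns and a binary
# `liver_metastasis` outcome (assumed prepared upstream).
# model, auc = fit_risk_model(train_df, test_df)
# The fitted coefficients then map onto nomogram axes (one scaled axis per predictor).
```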

  • Article type: Journal Article
    BACKGROUND: Stratipath Breast is a CE-IVD marked artificial intelligence-based solution for prognostic risk stratification of breast cancer patients into high- and low-risk groups, using haematoxylin and eosin (H&E)-stained histopathology whole slide images (WSIs). In this validation study, we assessed the prognostic performance of Stratipath Breast in two independent breast cancer cohorts.
    METHODS: This retrospective multi-site validation study included 2719 patients with primary breast cancer from two Swedish hospitals. The Stratipath Breast tool was applied to stratify patients based on digitised WSIs of the diagnostic H&E-stained tissue sections from surgically resected tumours. Prognostic performance was evaluated in a time-to-event framework using multivariable Cox proportional hazards analysis, with progression-free survival (PFS) as the primary endpoint.
    RESULTS: In the clinically relevant oestrogen receptor (ER)-positive/human epidermal growth factor receptor 2 (HER2)-negative patient subgroup, the estimated hazard ratio (HR) associated with PFS between low- and high-risk groups was 2.76 (95% CI: 1.63-4.66, p-value < 0.001) after adjusting for established risk factors. In the ER+/HER2- Nottingham histological grade (NHG) 2 subgroup, the HR was 2.20 (95% CI: 1.22-3.98, p-value = 0.009) between low- and high-risk groups.
    CONCLUSIONS: The results indicate an independent prognostic value of Stratipath Breast among all breast cancer patients, as well as in the clinically relevant ER+/HER2- subgroup and the NHG2/ER+/HER2- subgroup. Improved risk stratification of intermediate-risk ER+/HER2- breast cancers provides information relevant for treatment decisions of adjuvant chemotherapy and has the potential to reduce both under- and overtreatment. Image-based risk stratification provides the added benefit of short lead times and substantially lower cost compared to molecular diagnostics and therefore has the potential to reach broader patient groups.
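    A multivariable Cox analysis of the kind reported can be sketched with the lifelines package. The covariate and column names below are illustrative assumptions, not the study's variables or adjustment set.

```python
# Sketch of a multivariable Cox proportional hazards analysis with lifelines.
# `df` is assumed to hold one row per patient with follow-up time in months,
# a progression event indicator, the AI risk group and adjustment covariates.
import pandas as pd
from lifelines import CoxPHFitter

COVARIATES = ["risk_group_high", "age", "tumor_size_mm", "node_positive"]  # assumed columns

def fit_cox(df: pd.DataFrame) -> CoxPHFitter:
    cph = CoxPHFitter()
    cph.fit(df[["pfs_months", "progression_event"] + COVARIATES],
            duration_col="pfs_months", event_col="progression_event")
    cph.print_summary()   # hazard ratios (exp(coef)) with 95% CIs and p-values
    return cph

# cph = fit_cox(cohort_df)                      # cohort_df assumed prepared upstream
# cph.hazard_ratios_["risk_group_high"]         # adjusted HR for high- vs low-risk group
```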

  • Article type: Journal Article
    The practice of cytopathology has been significantly refined in recent years, largely through the creation of consensus rule sets for the diagnosis of particular specimens (Bethesda, Milan, Paris, and so forth). In general, these diagnostic systems have focused on reducing intraobserver variance, removing nebulous/redundant categories, reducing the use of "atypical" diagnoses, and promoting the use of quantitative scoring systems while providing a uniform language to communicate these results. Computational pathology is a natural offshoot of this process in that it promises 100% reproducible diagnoses rendered by quantitative processes that are free from many of the biases of human practitioners.

  • Article type: Journal Article
    In early gastric cancer (EGC), the presence of lymph node metastasis (LNM) is a crucial factor for determining the treatment options. Endoscopic resection is used for treatment of EGC with minimal risk of LNM. However, owing to the lack of definitive criteria for identifying patients who require additional surgery, some patients undergo unnecessary additional surgery. Considering that histopathologic patterns are a significant factor for predicting lymph node metastasis in gastric cancer, we aimed to develop a machine learning algorithm that can predict LNM status using hematoxylin and eosin (H&E)-stained images. The images were obtained from several institutions. Our pipeline comprised two sequential components: a feature extractor and a risk classifier. For the feature extractor, a segmentation network (DeepLabV3+) was trained on 243 WSIs across three datasets to differentiate each histological subtype. The risk classifier was trained with XGBoost using 70 morphological features inferred from the trained feature extractor. The trained segmentation network (the feature extractor) achieved high performance, with patch-level pixel accuracies of 0.9348 and 0.8939 on the internal and external datasets, respectively. The risk classifier achieved an overall AUC of 0.75 in predicting LNM status. Remarkably, one of the datasets also showed a promising result with an AUC of 0.92. This is the first multi-institution study to develop a machine learning algorithm for predicting LNM status in patients with EGC using H&E-stained histopathology images. Our findings have the potential to improve the selection of patients who require surgery among those with EGC showing high-risk histological features.
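    The second-stage risk classifier described above can be sketched with xgboost and scikit-learn. The morphological feature matrix, labels, and hyperparameter values are illustrative assumptions rather than the study's configuration.

```python
# Sketch of the risk-classification stage: train a gradient-boosted classifier on
# slide-level morphological features (e.g. the 70 features derived from the
# segmentation output) and evaluate with AUC.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def train_lnm_classifier(features: np.ndarray, labels: np.ndarray):
    """features: (n_slides, n_morph_features); labels: 1 = LNM present, 0 = absent."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.2, stratify=labels, random_state=0)
    clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                        eval_metric="logloss")
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    return clf, auc

# clf, auc = train_lnm_classifier(morph_features, lnm_labels)  # arrays assumed prepared upstream
```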