pathology image

  • Article type: Journal Article
    Recent advances in foundation models have revolutionized model development in digital pathology, reducing dependence on the extensive manual annotations required by traditional methods. The ability of foundation models to generalize well with few-shot learning addresses critical barriers in adapting models to diverse medical imaging tasks. This work presents the Granular Box Prompt Segment Anything Model (GB-SAM), an improved version of the Segment Anything Model (SAM) fine-tuned using granular box prompts with limited training data. GB-SAM aims to reduce the dependency on expert pathologist annotators by enhancing the efficiency of the automated annotation process. Granular box prompts are small box regions derived from ground-truth masks, conceived to replace the conventional approach of using a single large box covering the entire H&E-stained image patch. This method allows a localized and detailed analysis of gland morphology, enhancing the segmentation accuracy of individual glands and reducing the ambiguity that larger boxes might introduce in morphologically complex regions. We compared the performance of our GB-SAM model against U-Net trained on different sizes of the CRAG dataset. We evaluated the models across histopathological datasets, including CRAG, GlaS, and Camelyon16. GB-SAM consistently outperformed U-Net, showing less degradation in segmentation performance as training data was reduced. Specifically, on the CRAG dataset, GB-SAM achieved a Dice coefficient of 0.885 compared to U-Net's 0.857 when trained on 25% of the data. Additionally, GB-SAM demonstrated segmentation stability on the CRAG testing dataset and superior generalization across unseen datasets, including challenging lymph node segmentation in Camelyon16, where it achieved a Dice coefficient of 0.740 versus U-Net's 0.491. Furthermore, compared to SAM-Path and Med-SAM, GB-SAM showed competitive performance: GB-SAM achieved a Dice score of 0.900 on the CRAG dataset, while SAM-Path achieved 0.884. On the GlaS dataset, Med-SAM reported a Dice score of 0.956, whereas GB-SAM achieved 0.885 with significantly less training data. These results highlight GB-SAM's advanced segmentation capabilities and reduced dependency on large datasets, indicating its potential for practical deployment in digital pathology, particularly in settings with limited annotated datasets.
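    This abstract compares models by Dice coefficient throughout. For readers unfamiliar with the metric, a minimal sketch of how it is computed on binary segmentation masks (the function name and toy masks are illustrative, not from the paper):

    ```python
    import numpy as np

    def dice_coefficient(pred, target, eps=1e-7):
        """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
        pred = np.asarray(pred, dtype=bool)
        target = np.asarray(target, dtype=bool)
        intersection = np.logical_and(pred, target).sum()
        return 2.0 * intersection / (pred.sum() + target.sum() + eps)

    # Two toy 4x4 masks: each has 4 foreground pixels, 2 of which overlap
    pred   = np.array([[1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 0, 0]])
    target = np.array([[1, 0, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 1, 0]])
    print(round(dice_coefficient(pred, target), 3))  # → 0.5
    ```

    A Dice of 1.0 means perfect overlap and 0.0 means none, so the gap between, e.g., 0.740 and 0.491 on Camelyon16 is substantial.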

  • Article type: Journal Article
    This study aimed to develop a deep learning (DL) model for predicting the recurrence risk of lung adenocarcinoma (LUAD) based on its histopathological features. Clinicopathological data and whole-slide images from 164 LUAD cases were collected and used to train DL models with ImageNet pre-trained efficientnet-b2, densenet201, and resnet152 architectures. The models were trained to classify each image patch into high-risk or low-risk groups, and the case-level result was determined by multiple instance learning using features from the model's final FC layer across all patches. The clinicopathological and genetic characteristics of the model-based risk groups were analyzed. For predicting recurrence, the model had an area under the curve of 0.763, with a sensitivity, specificity, and accuracy of 0.750, 0.633, and 0.680 in the test set, respectively. High-risk cases for recurrence predicted by the model (HR group) were significantly associated with shorter recurrence-free survival and a higher stage (both p < 0.001). The HR group was associated with specific histopathological features such as poorly differentiated components, complex glandular pattern components, tumor spread through air spaces, and a higher grade. In the HR group, pleural invasion, necrosis, and lymphatic invasion were more frequent, and the size of the invasion was larger (all p < 0.001). Several genetic mutations, including TP53 mutations (p = 0.007), were found more frequently in the HR group. The results for stages I-II were similar to those of the general cohort. The DL-based model can predict the recurrence risk of LUAD and identify the presence of TP53 gene mutations by analyzing histopathologic features.
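    The case-level aggregation step can be pictured with a simple mean-pooling variant of multiple instance learning. The abstract does not specify the exact MIL formulation, so the feature dimensions, pooling choice, and classifier weights below are placeholders:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical patch-level features: one final-FC-layer vector per patch.
    # The 32 patches and 64-d size are illustrative, not the paper's dimensions.
    patch_features = rng.normal(size=(32, 64))

    # Simplest MIL aggregation: mean-pool patch features into one case-level
    # vector, then apply a linear classifier (weights are random placeholders).
    case_vector = patch_features.mean(axis=0)
    w, b = rng.normal(size=64), 0.0
    logit = float(case_vector @ w + b)
    risk = 1.0 / (1.0 + np.exp(-logit))   # probability of the "high-risk" class
    print("high-risk" if risk >= 0.5 else "low-risk")
    ```

    Attention-weighted pooling is another common choice; the key point is that one label per case supervises predictions assembled from many unlabeled patches.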

  • Article type: Journal Article
    Cancer is a major global health issue in which early diagnosis and treatment have proven to be life-saving. This holds true for oral cancer, emphasizing the significance of timely intervention. Deep learning techniques have gained traction in early cancer detection, exhibiting promising outcomes in accurate diagnosis. However, collecting a substantial amount of training data poses a challenge for deep learning models in cancer diagnosis. To address this limitation, this study proposes an oral cancer diagnosis approach based on a few-shot learning framework that circumvents the need for extensive training data. Specifically, a prototypical network is employed to construct a diagnostic model, wherein two feature extractors are utilized to extract prototypical features and query features, respectively, departing from the conventional use of a single feature extraction function in prototypical networks. Moreover, a customized loss function is designed for the proposed method. Rigorous experimentation using a histopathological image dataset demonstrates the superior performance of our proposed approach over comparison methods.
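    The prototypical-network inference step described here, classifying a query by its distance to per-class prototypes, can be sketched as follows. The 2-d toy embeddings stand in for the outputs of the paper's two feature extractors:

    ```python
    import numpy as np

    def classify_query(support, labels, query):
        """Prototypical-network inference: assign the query to the class whose
        prototype (mean support embedding) is nearest in Euclidean distance.
        support: (n, d) embedded support examples; labels: (n,); query: (d,)."""
        classes = np.unique(labels)
        prototypes = np.stack([support[labels == c].mean(axis=0) for c in classes])
        dists = np.linalg.norm(prototypes - query, axis=1)
        return classes[np.argmin(dists)]

    # Toy 2-way few-shot episode in a 2-d embedding space
    support = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [4.8, 5.1]])
    labels  = np.array([0, 0, 1, 1])
    print(classify_query(support, labels, np.array([0.3, 0.2])))  # → 0
    ```

    Training typically minimizes a cross-entropy loss over softmaxed negative distances; the paper's customized loss is not detailed in the abstract, so it is not reproduced here.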

  • Article type: Journal Article
    Comprehensively analyzing the corresponding regions in images of serial slices stained using different methods is a common but important operation in pathological diagnosis. To help increase the efficiency of this analysis, various image registration methods have been proposed to match the corresponding regions in different images, but their performance is highly influenced by the rotations, deformations, and staining variations between serial pathology images. In this work, we propose an orientation-free ring feature descriptor with stain-variability normalization for pathology image matching. Specifically, we normalize image staining to similar levels to minimize the impact of staining differences on pathology image matching. To overcome the rotation and deformation issues, we propose a rotation-invariant, orientation-free ring feature descriptor that generates novel adaptive bins from ring features to build feature vectors. We measure the Euclidean distance between the feature vectors to evaluate keypoint similarity and thereby achieve pathology image matching. A total of 46 pairs of clinical pathology images in hematoxylin-eosin and immunohistochemistry staining were used to verify the performance of our method. Experimental results indicate that our method meets the pathology image matching accuracy requirement (error < 300 μm) and is especially competent for the large-angle rotation cases common in clinical practice.
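    The final matching step, nearest-neighbour keypoint matching by Euclidean distance between descriptors, can be sketched as below. The 3-d toy descriptors and the distance threshold stand in for the paper's ring-feature vectors:

    ```python
    import numpy as np

    def match_keypoints(desc_a, desc_b, max_dist=1.0):
        """Match each descriptor in image A to its nearest neighbour in image B
        by Euclidean distance, keeping only matches under a distance threshold."""
        matches = []
        for i, da in enumerate(desc_a):
            dists = np.linalg.norm(desc_b - da, axis=1)
            j = int(np.argmin(dists))
            if dists[j] <= max_dist:
                matches.append((i, j, float(dists[j])))
        return matches

    # Toy 3-d descriptors; the ring-feature construction itself is paper-specific
    desc_a = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    desc_b = np.array([[0.0, 0.9, 0.1], [0.95, 0.0, 0.0]])
    print(match_keypoints(desc_a, desc_b))  # a0 pairs with b1, a1 with b0
    ```

    In practice a ratio test or mutual-nearest-neighbour check is often added to reject ambiguous matches before estimating the registration transform.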

  • Article type: Journal Article
    Advances in genomic technologies have enabled the development of abundant mouse models of human disease, requiring accurate phenotyping to elucidate the consequences of genetic manipulation. Anatomic pathology, an important component of the mouse phenotyping pipeline, is ideally performed by human or veterinary pathologists; however, due to insufficient numbers of pathologists qualified to assess these mouse models morphologically, research scientists may perform "do-it-yourself" pathology, resulting in diagnostic error. In the biomedical literature, pathology data are commonly presented as images of tissue sections, stained with either hematoxylin and eosin or antibodies via immunohistochemistry, accompanied by a figure legend. Data presented in such images and figure legends may contain inaccuracies. Furthermore, there is limited guidance for non-pathologist research scientists concerning the elements required in an ideal pathology image and figure legend in a research publication. In this overview, the components of an ideal pathology image and figure legend are outlined and comprise image quality, image composition, and image interpretation. Background knowledge is important for producing accurate pathology images and critically assessing these images in the literature. This foundational knowledge includes understanding relevant human and mouse anatomy and histology and, for cancer researchers, an understanding of human and mouse tumor classification and morphology, mouse strain background lesions, and tissue processing artifacts. Accurate interpretation of immunohistochemistry is also vitally important and is detailed with emphasis on the requirement for tissue controls and the distribution, intensity, and intracellular location of staining. Common pitfalls in immunohistochemistry interpretation are outlined, and a checklist of questions is provided by which any pathology image may be critically examined. Collaboration with pathologist colleagues is encouraged. This overview aims to equip researchers to critically assess the quality and accuracy of pathology images in the literature to improve the reliability and reproducibility of published pathology data. © 2023 The Authors. Current Protocols published by Wiley Periodicals LLC.

  • Article type: Journal Article
    OBJECTIVE: Unsupervised domain adaptation (UDA) is a powerful approach for tackling domain discrepancies and reducing the burden of laborious and error-prone pixel-level annotations for instance segmentation. However, the domain adaptation strategies utilized in previous instance segmentation models pool all the labeled/detected instances together to train the instance-level GAN discriminator, which neglects the differences among multiple instance categories. Such pooling prevents UDA instance segmentation models from learning the categorical correspondence between source and target domains needed for accurate instance classification.
    METHODS: To tackle this challenge, we propose an Instance Segmentation CycleGAN (ISC-GAN) algorithm for UDA multiclass instance segmentation. We conduct extensive experiments on the multiclass nuclei recognition task to transfer knowledge from hematoxylin and eosin to immunohistochemistry-stained pathology images. Specifically, we fuse CycleGAN with Mask R-CNN to learn categorical correspondence with image-level domain adaptation and virtual supervision. Moreover, we utilize curriculum learning to separate the learning process into two steps: (1) learning segmentation only on labeled source data, and (2) learning target domain segmentation with paired virtual labels generated by ISC-GAN. Performance was further improved through experiments with other strategies, including shared weights, knowledge distillation, and expanded source data.
    RESULTS: Compared to the baseline model and three UDA instance detection and segmentation models, ISC-GAN achieves state-of-the-art performance, with 39.1% average precision and 48.7% average recall. The source code of ISC-GAN is available at https://github.com/sdw95927/InstanceSegmentation-CycleGAN.
    CONCLUSIONS: ISC-GAN adapted knowledge from hematoxylin and eosin to immunohistochemistry-stained pathology images, suggesting the potential for reducing the need for large annotated pathology image datasets in deep learning and computer vision tasks.

  • Article type: Journal Article
    Lung cancer is one of the most common malignant tumors in human beings. It is highly fatal, as its early symptoms are not obvious. In clinical medicine, physicians rely on the information provided by pathology tests as an important reference for the final diagnosis of many diseases; pathology diagnosis is therefore known as the gold standard for disease diagnosis. However, the complexity of the information contained in pathology images and the increase in the number of patients far outpace the number of pathologists, especially for the treatment of lung cancer in less developed countries. To address this problem, we propose a plug-and-play visual activation function (AF), CroReLU, based on a priori knowledge of pathology, which makes it possible to use deep learning models for precision medicine. To the best of our knowledge, this work is the first to optimize deep learning models for pathology image diagnosis from the perspective of AFs. By adopting a unique crossover window design for the activation layer of the neural network, CroReLU is equipped with the ability to model spatial information and capture histological morphological features of lung cancer such as papillary, micropapillary, and tubular alveolar patterns. To test the effectiveness of this design, 776 lung cancer pathology images were collected as experimental data. When CroReLU was inserted into the SeNet network (SeNet_CroReLU), the diagnostic accuracy reached 98.33%, significantly better than that of common neural network models at this stage. The generalization ability of the proposed method was validated on the LC25000 dataset, which has a completely different data distribution and recognition task, to address practical clinical needs. The experimental results show that CroReLU can recognize inter- and intra-class differences in cancer pathology images, and that its recognition accuracy exceeds that of existing work on complex network-layer designs.

  • Article type: Journal Article
    The prognosis of patients with lung adenocarcinoma (LUAD), especially early-stage LUAD, is dependent on clinicopathological features. However, its predictive utility is limited. In this study, we developed and trained a DeepRePath model based on a deep convolutional neural network (CNN) using multi-scale pathology images to predict the prognosis of patients with early-stage LUAD. DeepRePath was pre-trained with 1067 hematoxylin and eosin-stained whole-slide images of LUAD from the Cancer Genome Atlas. DeepRePath was further trained and validated using two separate CNNs and multi-scale pathology images of 393 resected lung cancer specimens from patients with stage I and II LUAD. Of the 393 patients, 95 patients developed recurrence after surgical resection. The DeepRePath model showed average area under the curve (AUC) scores of 0.77 and 0.76 in cohort I and cohort II (external validation set), respectively. Owing to low performance, DeepRePath cannot be used as an automated tool in a clinical setting. When gradient-weighted class activation mapping was used, DeepRePath indicated the association between atypical nuclei, discohesive tumor cells, and tumor necrosis in pathology images showing recurrence. Despite the limitations associated with a relatively small number of patients, the DeepRePath model based on CNNs with transfer learning could predict recurrence after the curative resection of early-stage LUAD using multi-scale pathology images.

  • Article type: Journal Article
    The purpose of this study was to develop a computer-aided diagnosis (CAD) system for automatic classification of histopathological images of lung tissues. Two datasets (private and public datasets) were obtained and used for developing and validating CAD. The private dataset consists of 94 histopathological images that were obtained for the following five categories: normal, emphysema, atypical adenomatous hyperplasia, lepidic pattern of adenocarcinoma, and invasive adenocarcinoma. The public dataset consists of 15,000 histopathological images that were obtained for the following three categories: lung adenocarcinoma, lung squamous cell carcinoma, and benign lung tissue. These images were automatically classified using machine learning and two types of image feature extraction: conventional texture analysis (TA) and homology-based image processing (HI). Multiscale analysis was used in the image feature extraction, after which automatic classification was performed using the image features and eight machine learning algorithms. The multicategory accuracy of our CAD system was evaluated in the two datasets. In both the public and private datasets, the CAD system with HI was better than that with TA. It was possible to build an accurate CAD system for lung tissues. HI was more useful for the CAD systems than TA.

  • Article type: Journal Article
    BACKGROUND: The spatial distributions of different types of cells can reveal a cancer cell's growth pattern, its relationships with the tumor microenvironment, and the immune response of the body, all of which represent key "hallmarks of cancer". However, the process by which pathologists manually recognize and localize all the cells in pathology slides is extremely labor-intensive and error-prone.
    METHODS: In this study, we developed an automated cell type classification pipeline, ConvPath, which includes nuclei segmentation; convolutional neural network-based classification of tumor cells, stromal cells, and lymphocytes; and extraction of tumor microenvironment-related features for lung cancer pathology images. To facilitate users in leveraging this pipeline for their research, all source scripts for the ConvPath software are available at https://qbrc.swmed.edu/projects/cnn/.
    RESULTS: The overall classification accuracy was 92.9% and 90.1% in the training and independent testing datasets, respectively. By identifying cells and classifying cell types, this pipeline can convert a pathology image into a "spatial map" of tumor, stromal, and lymphocyte cells. From this spatial map, we can extract features that characterize the tumor microenvironment. Based on these features, we developed an image feature-based prognostic model and validated it in two independent cohorts. The predicted risk group serves as an independent prognostic factor after adjusting for clinical variables including age, gender, smoking status, and stage.
    CONCLUSIONS: The analysis pipeline developed in this study can convert a pathology image into a "spatial map" of tumor cells, stromal cells, and lymphocytes. This could greatly facilitate and empower comprehensive analysis of the spatial organization of cells, as well as their roles in tumor progression and metastasis.
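    Once a "spatial map" of cell centroids and types exists, microenvironment features can be computed directly from it. As a hedged illustration (the toy coordinates and the two features below are not ConvPath's actual feature set):

    ```python
    import numpy as np

    # A toy "spatial map": cell centroids (in μm) with a type label per cell
    # (0 = tumor, 1 = stromal, 2 = lymphocyte) -- stand-ins for pipeline output.
    coords = np.array([[10.0, 10.0], [12.0, 11.0], [50.0, 50.0],
                       [30.0, 30.0], [11.0, 14.0], [52.0, 49.0]])
    types = np.array([0, 0, 0, 1, 2, 2])

    # Feature 1: per-type composition of the map
    composition = np.bincount(types, minlength=3) / len(types)

    # Feature 2: mean distance from each tumor cell to its nearest lymphocyte,
    # a simple proxy for immune-cell infiltration
    tumor, lymph = coords[types == 0], coords[types == 2]
    d = np.linalg.norm(tumor[:, None, :] - lymph[None, :, :], axis=2)
    mean_nn_dist = d.min(axis=1).mean()
    print(composition, float(mean_nn_dist))
    ```

    Such per-image feature vectors are what a downstream prognostic model, like the one validated here in two cohorts, would consume.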