Keywords: CycleGAN; cross-modality learning; deep learning; digital pathology; generative AI; generative adversarial network; image segmentation

Source: DOI: 10.1016/j.modpat.2024.100591

Abstract:
Despite recent advances, the adoption of computer vision methods into clinical and commercial applications has been hampered by the limited availability of accurate ground truth tissue annotations required to train robust supervised models. Generating such ground truth can be accelerated by annotating tissue molecularly using immunofluorescence (IF) staining and mapping these annotations to a post-IF H&E (terminal H&E). Mapping the annotations between the IF and the terminal H&E increases both the scale and accuracy with which ground truth can be generated. However, discrepancies between terminal H&E and conventional H&E caused by IF tissue processing have limited this implementation. We sought to overcome this challenge and achieve compatibility between these parallel modalities using synthetic image generation, in which a cycle-consistent generative adversarial network (CycleGAN) was applied to transfer the appearance of conventional H&E such that it emulates the terminal H&E. These synthetic emulations allowed us to train a deep learning (DL) model for the segmentation of epithelium in the terminal H&E that could be validated against the IF staining of epithelial-based cytokeratins. The combination of this segmentation model with the CycleGAN stain transfer model enabled performant epithelium segmentation in conventional H&E images. The approach demonstrates that the training of accurate segmentation models for the breadth of conventional H&E data can be executed free of human-expert annotations by leveraging molecular annotation strategies such as IF, so long as the tissue impacts of the molecular annotation protocol are captured by generative models that can be deployed prior to the segmentation process.
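The inference pipeline the abstract describes is a two-stage composition: a CycleGAN generator G restains a conventional H&E image to emulate terminal (post-IF) H&E, and a segmentation model S, trained on terminal H&E with IF-derived labels, then predicts the epithelium mask, i.e. mask = S(G(x)). A minimal sketch of that composition is below; the function bodies are hypothetical placeholders (simple array operations), not the paper's trained networks, and all names are illustrative assumptions.

```python
import numpy as np

def cyclegan_generator(conventional_he: np.ndarray) -> np.ndarray:
    """Map a conventional H&E tile (H, W, 3) to a synthetic terminal-H&E tile.

    Placeholder: a fixed per-channel affine shift stands in for the trained
    CycleGAN generator G described in the abstract.
    """
    return np.clip(conventional_he * 0.9 + 0.05, 0.0, 1.0)

def segment_epithelium(terminal_he: np.ndarray) -> np.ndarray:
    """Predict a binary epithelium mask (H, W) from a terminal-H&E tile.

    Placeholder: a simple channel threshold stands in for the DL
    segmentation model S trained against cytokeratin IF labels.
    """
    return (terminal_he[..., 0] > 0.5).astype(np.uint8)

def segment_conventional_he(tile: np.ndarray) -> np.ndarray:
    """Two-stage inference on conventional H&E: mask = S(G(x))."""
    return segment_epithelium(cyclegan_generator(tile))

# Dummy 256x256 RGB tile with values in [0, 1].
tile = np.random.default_rng(0).random((256, 256, 3))
mask = segment_conventional_he(tile)
print(mask.shape, mask.dtype)  # (256, 256) uint8
```

The point of the composition is that S never needs conventional-H&E training labels: it is trained entirely in the terminal-H&E domain, and G bridges the domain gap at inference time.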