generative adversarial network

  • Article Type: Journal Article
    Ultraviolet-visible (UV-Vis) absorption spectroscopy, due to its high sensitivity and capability for real-time online monitoring, is one of the most promising tools for the rapid identification of external water in rainwater pipe networks. However, difficulties in obtaining actual samples lead to insufficient real samples, and the complex composition of wastewater can affect the accurate traceability analysis of external water in rainwater pipe networks. In this study, a new method for identifying external water in rainwater pipe networks with a small number of samples is proposed. In this method, the Generative Adversarial Network (GAN) algorithm was initially used to generate spectral data from the absorption spectra of water samples; subsequently, the multiplicative scatter correction (MSC) algorithm was applied to process the UV-Vis absorption spectra of different types of water samples; following this, the Variational Mode Decomposition (VMD) algorithm was employed to decompose and recombine the spectra after MSC; and finally, the long short-term memory (LSTM) algorithm was used to establish the identification model between the recombined spectra and the water source types, and to determine the optimal number of decomposed spectra K. The research results show that when the number of decomposed spectra K is 5, the identification accuracy for different sources of domestic sewage, surface water, and industrial wastewater is the highest, with an overall accuracy of 98.81%. Additionally, the performance of this method was validated by mixed water samples (combinations of rainwater and domestic sewage, rainwater and surface water, and rainwater and industrial wastewater). The results indicate that the accuracy of the proposed method in identifying the source of external water in rainwater reaches 98.99%, with detection time within 10 s. Therefore, the proposed method can become a potential approach for rapid identification and traceability analysis of external water in rainwater pipe networks.
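    As a concrete illustration of one stage of this pipeline, the sketch below applies multiplicative scatter correction (MSC) to a batch of spectra in NumPy. Using the mean spectrum as the reference and the synthetic data are assumptions for illustration; the paper's exact preprocessing settings are not given in the abstract.

```python
import numpy as np

def msc(spectra, reference=None):
    """Multiplicative scatter correction for a (n_samples, n_wavelengths) array."""
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra, dtype=float)
    for i, spectrum in enumerate(spectra):
        # Least-squares fit: spectrum ~= slope * ref + intercept
        slope, intercept = np.polyfit(ref, spectrum, deg=1)
        corrected[i] = (spectrum - intercept) / slope
    return corrected

# Example: 20 synthetic UV-Vis spectra sampled at 200 wavelengths
rng = np.random.default_rng(0)
base = np.exp(-np.linspace(0, 4, 200))                  # idealized absorption curve
spectra = base * rng.uniform(0.8, 1.2, (20, 1)) + rng.normal(0, 0.01, (20, 200))
print(msc(spectra).shape)                               # (20, 200)
```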

  • Article Type: Journal Article
    Game theory-inspired deep learning using a generative adversarial network provides an environment to competitively interact and accomplish a goal. In the context of medical imaging, most work has focused on achieving single tasks such as improving image resolution, segmenting images, and correcting motion artifacts. We developed a dual-objective adversarial learning framework that simultaneously 1) reconstructs higher quality brain magnetic resonance images (MRIs) that 2) retain disease-specific imaging features critical for predicting progression from mild cognitive impairment (MCI) to Alzheimer's disease (AD). We obtained 3-Tesla, T1-weighted brain MRIs of participants from the Alzheimer's Disease Neuroimaging Initiative (ADNI, N=342) and the National Alzheimer's Coordinating Center (NACC, N=190) datasets. We simulated MRIs with missing data by removing 50% of sagittal slices from the original scans (i.e., diced scans). The generator was trained to reconstruct brain MRIs using the diced scans as input. We introduced a classifier into the GAN architecture to discriminate between stable (i.e., sMCI) and progressive MCI (i.e., pMCI) based on the generated images to facilitate encoding of disease-related information during reconstruction. The framework was trained using ADNI data and externally validated on NACC data. In the NACC cohort, generated images had better image quality than the diced scans (structural similarity (SSIM) index: 0.553 ± 0.116 versus 0.348 ± 0.108). Furthermore, a classifier utilizing the generated images distinguished pMCI from sMCI more accurately than with the diced scans (F1-score: 0.634 ± 0.019 versus 0.573 ± 0.028). Competitive deep learning has potential to facilitate disease-oriented image reconstruction in those at risk of developing Alzheimer's disease.
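    A hedged PyTorch sketch of how a dual-objective generator loss of this kind could be composed: reconstruction fidelity, adversarial realism, and pMCI/sMCI classification. The loss weights (lambda_adv, lambda_cls), tensor shapes, and toy inputs are illustrative assumptions, not the paper's actual architecture or settings.

```python
import torch
import torch.nn as nn

recon_loss = nn.L1Loss()
adv_loss = nn.BCEWithLogitsLoss()
cls_loss = nn.CrossEntropyLoss()

def generator_objective(reconstructed, target, disc_logits, cls_logits, labels,
                        lambda_adv=0.1, lambda_cls=1.0):
    """Combine the three terms driving a dual-objective generator."""
    l_rec = recon_loss(reconstructed, target)                    # image fidelity
    l_adv = adv_loss(disc_logits, torch.ones_like(disc_logits))  # fool the discriminator
    l_cls = cls_loss(cls_logits, labels)                         # keep disease-related signal
    return l_rec + lambda_adv * l_adv + lambda_cls * l_cls

# Toy batch: 4 reconstructed "slices", discriminator logits, classifier logits, labels
rec, tgt = torch.rand(4, 1, 32, 32), torch.rand(4, 1, 32, 32)
d_out, c_out, y = torch.randn(4, 1), torch.randn(4, 2), torch.tensor([0, 1, 1, 0])
print(generator_objective(rec, tgt, d_out, c_out, y))
```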

  • Article Type: Journal Article
    Despite recent advances, the adoption of computer vision methods into clinical and commercial applications has been hampered by the limited availability of accurate ground truth tissue annotations required to train robust supervised models. Generating such ground truth can be accelerated by annotating tissue molecularly using immunofluorescence staining (IF) and mapping these annotations to a post-IF H&E (terminal H&E). Mapping the annotations between the IF and the terminal H&E increases both the scale and accuracy by which ground truth could be generated. However, discrepancies between terminal H&E and conventional H&E caused by IF tissue processing have limited this implementation. We sought to overcome this challenge and achieve compatibility between these parallel modalities using synthetic image generation, in which a cycle-consistent generative adversarial network (CycleGAN) was applied to transfer the appearance of conventional H&E such that it emulates the terminal H&E. These synthetic emulations allowed us to train a deep learning (DL) model for the segmentation of epithelium in the terminal H&E that could be validated against the IF staining of epithelial-based cytokeratins. The combination of this segmentation model with the CycleGAN stain transfer model enabled performant epithelium segmentation in conventional H&E images. The approach demonstrates that the training of accurate segmentation models for the breadth of conventional H&E data can be executed free of human-expert annotations by leveraging molecular annotation strategies such as IF, so long as the tissue impacts of the molecular annotation protocol are captured by generative models that can be deployed prior to the segmentation process.
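    A minimal PyTorch sketch of the inference chain implied here: a stain-transfer generator maps conventional H&E toward the terminal-H&E appearance, and a segmentation model trained on terminal H&E is then applied. Both modules below are untrained stand-ins (assumptions), not the networks from the paper.

```python
import torch
import torch.nn as nn

stain_transfer = nn.Sequential(nn.Conv2d(3, 3, kernel_size=3, padding=1), nn.Tanh())
segmenter = nn.Sequential(nn.Conv2d(3, 1, kernel_size=1), nn.Sigmoid())

@torch.no_grad()
def segment_conventional_he(image):
    """image: (N, 3, H, W) conventional H&E tile -> (N, 1, H, W) boolean epithelium mask."""
    emulated_terminal_he = stain_transfer(image)   # CycleGAN-style appearance transfer
    return segmenter(emulated_terminal_he) > 0.5   # threshold the probability map

tile = torch.rand(1, 3, 256, 256)
print(segment_conventional_he(tile).shape)         # torch.Size([1, 1, 256, 256])
```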

  • Article Type: Journal Article
    BACKGROUND: Medical imaging datasets frequently suffer from data imbalance, where the majority of pixels correspond to healthy regions and only a minority belong to affected regions. This uneven pixel distribution exacerbates the challenges associated with computer-aided diagnosis. Networks trained on imbalanced data tend to exhibit bias toward the majority classes, often demonstrating high precision but low sensitivity.
    METHODS: We designed a new adversarial-learning-based network, the conditional contrastive generative adversarial network (CCGAN), to tackle class imbalance in highly imbalanced MRI datasets. The proposed model has three new components: (1) class-specific attention, (2) a region rebalancing module (RRM), and (3) a supervised contrastive learning network (SCoLN). The class-specific attention focuses on the more discriminative areas of the input representation, capturing more relevant features. The RRM promotes a more balanced distribution of features across the various regions of the input representation, ensuring a more equitable segmentation process. The generator of the CCGAN learns pixel-level segmentation by receiving feedback from the SCoLN based on the true-negative and true-positive maps. This process ensures that the final semantic segmentation not only addresses the imbalanced-data issue but also enhances classification accuracy.
    RESULTS: The proposed model showed state-of-the-art performance on five highly imbalanced medical image segmentation datasets, and therefore holds significant potential for application in medical diagnosis in cases characterized by highly imbalanced data distributions. The CCGAN achieved the highest dice similarity coefficient (DSC) scores across the datasets: 0.965 ± 0.012 for BUS2017, 0.896 ± 0.091 for DDTI, 0.786 ± 0.046 for LiTS MICCAI 2017, 0.712 ± 1.5 for the ATLAS dataset, and 0.877 ± 1.2 for the BRATS 2015 dataset. DeepLab-V3 follows closely, securing the second-best position with DSC scores of 0.948 ± 0.010 for BUS2017, 0.895 ± 0.014 for DDTI, 0.763 ± 0.044 for LiTS MICCAI 2017, 0.696 ± 1.1 for the ATLAS dataset, and 0.846 ± 1.4 for the BRATS 2015 dataset.
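    The headline metric here is the Dice similarity coefficient (DSC). Below is a minimal NumPy sketch of DSC on binary masks; the smoothing constant eps is an assumption added only to avoid division by zero on empty masks.

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks of equal shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice(a, b), 3))   # 0.667
```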

  • Article Type: Journal Article
    Transforming optical facial images into sketches while preserving realism and facial features poses a significant challenge. The current methods that rely on paired training data are costly and resource-intensive. Furthermore, they often fail to capture the intricate features of faces, resulting in substandard sketch generation. To address these challenges, we propose the novel hierarchical contrast generative adversarial network (HCGAN). Firstly, HCGAN consists of a global sketch synthesis module that generates sketches with well-defined global features and a local sketch refinement module that enhances the ability to extract features in critical areas. Secondly, we introduce local refinement loss based on the local sketch refinement module, refining sketches at a granular level. Finally, we propose an association strategy called "warmup-epoch" and local consistency loss between the two modules to ensure HCGAN is effectively optimized. Evaluations of the CUFS and SKSF-A datasets demonstrate that our method produces high-quality sketches and outperforms existing state-of-the-art methods in terms of fidelity and realism. Compared to the current state-of-the-art methods, HCGAN reduces FID by 12.6941, 4.9124, and 9.0316 on three datasets of CUFS, respectively, and by 7.4679 on the SKSF-A dataset. Additionally, it obtained optimal scores for content fidelity (CF), global effects (GE), and local patterns (LP). The proposed HCGAN model provides a promising solution for realistic sketch synthesis under unpaired data training.
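    One plausible reading of the "warmup-epoch" association strategy, sketched as a simple weight schedule: the local consistency loss between the global and local modules is disabled during a warm-up period and then ramped in. The epoch counts and the linear ramp are assumptions, not details taken from the paper.

```python
def local_consistency_weight(epoch, warmup_epochs=10, max_weight=1.0):
    """Return 0 during warm-up, then ramp the local consistency weight linearly."""
    if epoch < warmup_epochs:
        return 0.0
    ramp = min(1.0, (epoch - warmup_epochs) / warmup_epochs)
    return max_weight * ramp

for e in (0, 5, 10, 15, 20, 30):
    print(e, local_consistency_weight(e))
```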

  • Article Type: Journal Article
    BACKGROUND: Postoperative hypoparathyroidism is a major complication of thyroidectomy, occurring when the parathyroid glands are inadvertently damaged during surgery. Although intraoperative images are rarely used to train artificial intelligence (AI) because of their complex nature, AI may be trained to intraoperatively detect parathyroid glands using various augmentation methods. The purpose of this study was to train an effective AI model to detect parathyroid glands during thyroidectomy.
    METHODS: Video clips of the parathyroid gland were collected during thyroid lobectomy procedures. Confirmed parathyroid images were used to train three types of datasets according to augmentation status: baseline, geometric transformation, and generative adversarial network-based image inpainting. The primary outcome was the average precision of the performance of AI in detecting parathyroid glands.
    RESULTS: A total of 152 fine-needle aspiration-confirmed parathyroid gland images were acquired from 150 patients who underwent unilateral lobectomy. The average precision of the AI model in detecting parathyroid glands based on baseline data was 77%. This performance was enhanced by applying both geometric transformation and image inpainting augmentation methods, with the geometric transformation data augmentation dataset showing a higher average precision (79%) than the image inpainting model (78.6%). When this model was subjected to external validation using a completely different thyroidectomy approach, the image inpainting method was more effective (46%) than both the geometric transformation (37%) and baseline (33%) methods.
    CONCLUSIONS: This AI model was found to be an effective and generalizable tool in the intraoperative identification of parathyroid glands during thyroidectomy, especially when aided by appropriate augmentation methods. Additional studies comparing model performance and surgeon identification, however, are needed to assess the true clinical relevance of this AI model.
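    A minimal NumPy sketch of the geometric-transformation augmentation arm compared above (random flips and right-angle rotations). The exact transform set used in the study is not given in the abstract, so this selection is an assumption.

```python
import numpy as np

def geometric_augment(image, rng):
    """Randomly flip and rotate an (H, W, C) image frame by multiples of 90 degrees."""
    if rng.random() < 0.5:
        image = np.flip(image, axis=1)          # horizontal flip
    if rng.random() < 0.5:
        image = np.flip(image, axis=0)          # vertical flip
    k = rng.integers(0, 4)                      # rotate by 0, 90, 180, or 270 degrees
    return np.rot90(image, k=k, axes=(0, 1)).copy()

rng = np.random.default_rng(42)
frame = rng.random((128, 128, 3))               # stand-in for an intraoperative video frame
print(geometric_augment(frame, rng).shape)      # (128, 128, 3)
```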

  • Article Type: Journal Article
    BACKGROUND: Mental disorders have ranked among the top 10 prevalent causes of burden on a global scale. Generative artificial intelligence (GAI) has emerged as a promising and innovative technological advancement that has significant potential in the field of mental health care. Nevertheless, there is a scarcity of research dedicated to examining and understanding the application landscape of GAI within this domain.
    OBJECTIVE: This review aims to inform the current state of GAI knowledge and identify its key uses in the mental health domain by consolidating relevant literature.
    METHODS: Records were searched within 8 reputable sources, including the Web of Science, PubMed, IEEE Xplore, medRxiv, bioRxiv, Google Scholar, CNKI, and Wanfang databases, between 2013 and 2023. Our focus was on original, empirical research published in English or Chinese that uses GAI technologies to benefit mental health. For an exhaustive search, we also checked the studies cited by relevant literature. Two reviewers were responsible for the data selection process, and all the extracted data were synthesized and summarized for brief and in-depth analyses depending on the GAI approaches used (traditional retrieval and rule-based techniques vs advanced GAI techniques).
    RESULTS: In this review of 144 articles, 44 (30.6%) met the inclusion criteria for detailed analysis. Six key uses of advanced GAI emerged: mental disorder detection, counseling support, therapeutic application, clinical training, clinical decision-making support, and goal-driven optimization. Advanced GAI systems have been mainly focused on therapeutic applications (n=19, 43%) and counseling support (n=13, 30%), with clinical training being the least common. Most studies (n=28, 64%) focused broadly on mental health, while specific conditions such as anxiety (n=1, 2%), bipolar disorder (n=2, 5%), eating disorders (n=1, 2%), posttraumatic stress disorder (n=2, 5%), and schizophrenia (n=1, 2%) received limited attention. Despite prevalent use, the efficacy of ChatGPT in the detection of mental disorders remains insufficient. In addition, 100 articles on traditional GAI approaches were found, indicating diverse areas where advanced GAI could enhance mental health care.
    CONCLUSIONS: This study provides a comprehensive overview of the use of GAI in mental health care, which serves as a valuable guide for future research, practical applications, and policy development in this domain. While GAI demonstrates promise in augmenting mental health care services, its inherent limitations emphasize its role as a supplementary tool rather than a replacement for trained mental health providers. A conscientious and ethical integration of GAI techniques is necessary, ensuring a balanced approach that maximizes benefits while mitigating potential challenges in mental health care practices.

  • Article Type: Journal Article
    The nowcasting of strong convective precipitation is highly demanded and presents significant challenges, as it offers meteorological services to diverse socio-economic sectors to prevent catastrophic weather events accompanied by strong convective precipitation from causing substantial economic losses and human casualties. With the accumulation of dual-polarization radar data, deep learning models based on data have been widely applied in the nowcasting of precipitation. Deep learning models exhibit certain limitations in the nowcasting approach: The evolutionary method is prone to accumulate errors throughout the iterative process (where multiple autoregressive models generate future motion fields and intensity residuals and then implicitly iterate to yield predictions), and the "regression to average" issue of the autoregressive model leads to the "blurring" phenomenon. The evolution method's generator is a two-stage model: In the initial stage, the generator employs the evolution method to generate the provisional forecasted data; in the subsequent stage, the generator reprocesses the provisional forecasted data. Although the evolution method's generator is a generative adversarial network, the adversarial strategy adopted by this model ignores the significance of the provisional forecasted data. Therefore, this study proposes an Adversarial Autoregressive Network (AANet): Firstly, the forecasted data are generated via the two-stage generators (where FURENet directly produces the provisional forecasted data, and the Semantic Synthesis Model reprocesses the provisional forecasted data); Subsequently, structural similarity loss (SSIM loss) is utilized to mitigate the influence of the "regression to average" issue; Finally, the two-stage adversarial (Tadv) strategy is adopted to assist the two-stage generators in generating more realistic and highly similar data. It has been experimentally verified that AANet outperforms NowcastNet in the nowcasting of the next 1 h, with a reduction of 0.0763 in normalized error (NE), 0.377 in root mean square error (RMSE), and 4.2% in false alarm rate (FAR), as well as an enhancement of 1.45 in peak signal-to-noise ratio (PSNR), 0.0208 in SSIM, 5.78% in critical success index (CSI), 6.25% in probability of detection (POD), and 5.7% in F1.
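    The quoted evaluation metrics (POD, FAR, CSI, F1) are standard contingency-table scores for thresholded precipitation nowcasts. A minimal NumPy sketch follows; the rain/no-rain threshold and the random test fields are assumptions, not the paper's evaluation protocol.

```python
import numpy as np

def nowcast_scores(pred, obs, threshold=0.5):
    """Contingency-table scores for binarized forecast and observation fields."""
    p, o = pred >= threshold, obs >= threshold
    hits = np.logical_and(p, o).sum()
    misses = np.logical_and(~p, o).sum()
    false_alarms = np.logical_and(p, ~o).sum()
    return {
        "POD": hits / (hits + misses),
        "FAR": false_alarms / (hits + false_alarms),
        "CSI": hits / (hits + misses + false_alarms),
        "F1": 2 * hits / (2 * hits + misses + false_alarms),
    }

rng = np.random.default_rng(1)
print(nowcast_scores(rng.random((64, 64)), rng.random((64, 64))))
```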

  • Article Type: Journal Article
    Hematoxylin and eosin (H&E) staining is a crucial technique for diagnosing glioma, allowing direct observation of tissue structures. However, the H&E staining workflow necessitates intricate processing, specialized laboratory infrastructures, and specialist pathologists, rendering it expensive, labor-intensive, and time-consuming. In view of these considerations, we combine a deep learning method with hyperspectral imaging, aiming to accurately and rapidly convert hyperspectral images into virtual H&E-stained images. The method overcomes the limitations of H&E staining by capturing tissue information at different wavelengths, providing tissue composition information as comprehensive and detailed as realistic H&E staining. In comparison with various generator structures, the Unet exhibits substantial overall advantages, as evidenced by a mean structural similarity index measure (SSIM) of 0.7731 and a peak signal-to-noise ratio (PSNR) of 23.3120, as well as the shortest training and inference time. A comprehensive software system for virtual H&E staining, which integrates CCD control, microscope control, and virtual H&E staining technology, is developed to facilitate fast intraoperative imaging, promote disease diagnosis, and accelerate the development of medical automation. The platform reconstructs large-scale virtual H&E staining images of gliomas at a high speed of 3.81 mm2/s. This innovative approach will pave the way for a novel, expedited route in histological staining.
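    PSNR, one of the two image-quality metrics reported above, computed from its standard definition. The data range of 1.0 (images scaled to [0, 1]) and the synthetic example are assumptions for this sketch.

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(3)
img = rng.random((64, 64))
noisy = np.clip(img + rng.normal(0, 0.05, img.shape), 0, 1)
print(round(psnr(img, noisy), 2))   # roughly 26 dB for this noise level
```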

  • Article Type: Journal Article
    Despite the growing adoption of wearable photoplethysmography (PPG) devices in personal health management, their measurement accuracy remains limited due to susceptibility to noise. This paper proposes a novel signal completion technique using generative adversarial networks that ensures both global and local consistency. Our approach innovatively addresses both short- and long-term PPG variations to restore waveforms while maintaining waveform consistency within and between pulses. We evaluated our model by removing up to 50% of segments from segmented PPG waveforms and comparing the original and reconstructed waveforms, including systolic peak information. The results demonstrate that our method accurately reconstructs waveforms with high fidelity, producing natural and seamless transitions without discontinuities at reconstructed boundaries. Additionally, the reconstructed waveforms preserve typical PPG shapes with minimal distortion, underscoring the effectiveness and novelty of our technique.
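    A minimal NumPy sketch of the evaluation setup described above: masking a fraction of contiguous segments from a pulse-like waveform so that a completion model can be scored against the original. The segment length and the synthetic waveform are assumptions made for illustration.

```python
import numpy as np

def mask_segments(signal, fraction=0.5, segment_len=50, rng=None):
    """Zero out `fraction` of fixed-length segments; return the masked signal and mask."""
    if rng is None:
        rng = np.random.default_rng()
    n_segments = len(signal) // segment_len
    drop = rng.choice(n_segments, size=int(fraction * n_segments), replace=False)
    mask = np.ones(len(signal), dtype=bool)
    for s in drop:
        mask[s * segment_len:(s + 1) * segment_len] = False
    return np.where(mask, signal, 0.0), mask

t = np.linspace(0, 10, 1000)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 2.4 * t)   # toy pulse-like signal
masked, mask = mask_segments(ppg, fraction=0.5)
print(mask.mean())   # about half of the samples are retained
```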