deep-learning

Deep learning
  • Article type: Journal Article
    Proteins perform their biological functions through motion. Although high-throughput prediction of the three-dimensional static structures of proteins has proved feasible using deep-learning-based methods, predicting conformational motions remains a challenge. Purely data-driven machine learning methods encounter difficulty in addressing such motions because available laboratory data on conformational motions are still limited. In this work, we develop a method for generating protein allosteric motions by integrating physical energy landscape information into deep-learning-based methods. We show that local energetic frustration, which quantifies the local features of the energy landscape governing protein allosteric dynamics, can be utilized to empower AlphaFold2 (AF2) to predict protein conformational motions. Starting from ground-state static structures, this integrative method generates alternative structures as well as pathways of protein conformational motions, using a progressive enhancement of the energetic-frustration features in the input multiple sequence alignment. For a model protein, adenylate kinase, we show that the generated conformational motions are consistent with available experimental and molecular dynamics simulation data. Applying the method to two other proteins, KaiB and ribose-binding protein, which undergo large-amplitude conformational changes, also successfully generates the alternative conformations. We also show how to extract overall features of the AF2 energy landscape topography, which has been considered by many to be a black box. Incorporating physical knowledge into deep-learning-based structure prediction algorithms provides a useful strategy to address the challenges of dynamic structure prediction of allosteric proteins.

  • Article type: Journal Article
    OBJECTIVE: To evaluate organ-at-risk (OAR) auto-segmentation in the head and neck region of computed tomography images using two different commercially available deep-learning-based auto-segmentation (DLAS) tools in single-institution clinical applications.
    METHODS: Twenty-two OARs were manually contoured by clinicians according to published guidelines on planning computed tomography (pCT) images for 40 clinical head and neck cancer (HNC) cases. Automatic contours were generated for each patient using two deep-learning-based auto-segmentation models, Manteia AccuContour and MIM ProtégéAI. The accuracy and integrity of autocontours (ACs) were then compared to expert contours (ECs) using the Sørensen-Dice similarity coefficient (DSC) and mean distance (MD) metrics.
    RESULTS: ACs were generated for 22 OARs using AccuContour and 17 OARs using ProtégéAI, with average contour generation times of 1 min/patient and 5 min/patient, respectively. EC and AC agreement was highest for the mandible (DSC 0.90 ± 0.16 and 0.91 ± 0.03) and lowest for the chiasm (DSC 0.28 ± 0.14 and 0.30 ± 0.14) for AccuContour and ProtégéAI, respectively. Using AccuContour, the average MD was <1 mm for 10 of the 22 OARs contoured, 1-2 mm for 6 OARs, and 2-3 mm for 6 OARs. For ProtégéAI, the average MD was <1 mm for 8 of 17 OARs, 1-2 mm for 6 OARs, and 2-3 mm for 3 OARs.
    CONCLUSIONS: Both DLAS programs proved to be valuable tools that significantly reduce the time required to generate large numbers of OAR contours in the head and neck region, even though manual editing of ACs is likely needed prior to implementation into treatment planning. The DSCs and MDs achieved were similar to those reported in other studies that evaluated various other DLAS solutions. Still, small-volume structures with nonideal contrast in CT images, such as nerves, are very challenging and will require additional solutions to achieve sufficient results.
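    The Sørensen-Dice similarity coefficient used above to score agreement between auto-contours and expert contours is straightforward to compute; a minimal sketch over binary voxel masks (the masks below are illustrative, not from the study):

    ```python
    def dice_coefficient(auto_mask, expert_mask):
        """Sørensen-Dice similarity coefficient between two binary masks,
        given as flat sequences of 0/1 voxel labels."""
        inter = sum(a & e for a, e in zip(auto_mask, expert_mask))
        total = sum(auto_mask) + sum(expert_mask)
        # Two empty masks agree perfectly by convention.
        return 1.0 if total == 0 else 2.0 * inter / total

    # Example: an auto-contour overlapping the expert contour on 3 of 4 voxels.
    ec = [1, 1, 1, 1, 0, 0]
    ac = [1, 1, 1, 0, 1, 0]
    print(round(dice_coefficient(ac, ec), 3))  # → 0.75
    ```

    A DSC of 0.90 (as for the mandible) thus means the overlap is 90% of the average contour size; the mean-distance metric is a separate surface-based measure not shown here.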

  • Article type: Journal Article
    Gait, a manifestation of one's walking pattern, intricately reflects the harmonious interplay of various bodily systems, offering valuable insights into an individual's health status. However, current studies fall short in extracting the temporal and spatial dependencies of joint motion, resulting in inefficient pathological gait classification. In this paper, we propose a Frequency Pyramid Graph Convolutional Network (FP-GCN), which complements temporal analysis and further enhances spatial feature extraction. Specifically, a spectral decomposition component is adopted to extract gait data at different time frames, which enhances the detection of rhythmic patterns and velocity variations in human gait and allows a detailed analysis of temporal features. Furthermore, a novel pyramidal feature extraction approach is developed to analyze inter-sensor dependencies, which can integrate features from different pathways, enhancing both temporal and spatial feature extraction. Our experiments on diverse datasets demonstrate the effectiveness of our approach. Notably, FP-GCN achieves an accuracy of 98.78% on public datasets and 96.54% on proprietary data, surpassing existing methodologies and underscoring its potential for advancing pathological gait classification. In summary, FP-GCN advances feature extraction and pathological gait recognition, which may offer advancements in healthcare provision, especially in regions with limited access to medical resources and in home-care environments. This work lays the foundation for further exploration and underscores the importance of remote health monitoring, diagnosis, and personalized intervention.
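    The spectral-decomposition idea, separating slow trends from fast rhythmic components of a joint-motion signal so each can be analyzed on its own, can be illustrated with a crude two-band split; this is a toy analogue under our own assumptions, not the paper's actual component:

    ```python
    def two_band_split(signal, window=4):
        """Split a 1-D gait signal into a slow (low-frequency) trend and a
        fast (high-frequency) residual via a centered moving average."""
        n = len(signal)
        half = window // 2
        low = []
        for i in range(n):
            lo, hi = max(0, i - half), min(n, i + half + 1)
            low.append(sum(signal[lo:hi]) / (hi - lo))
        # The residual carries the rhythmic, rapidly varying part.
        high = [s - l for s, l in zip(signal, low)]
        return low, high

    sig = [0, 1, 0, -1, 0, 1, 0, -1]
    low, high = two_band_split(sig)
    # The two bands sum back to the original signal exactly.
    print(all(abs(l + h - s) < 1e-9 for l, h, s in zip(low, high, sig)))
    ```

    A multi-band pyramid would repeat such a split at several scales before feeding each band to the graph network.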

  • Article type: Journal Article
    Evaluation of skin recovery is an important step in the treatment of burns. However, conventional methods only observe the surface of the skin and cannot quantify the injury volume. Optical coherence tomography (OCT) is a non-invasive, non-contact, real-time technique. Swept-source OCT uses near-infrared light and analyzes the intensity of the light echo at different depths to generate images from optical interference signals. To quantify the dynamic recovery of skin burns over time, laser-induced skin burns in mice were evaluated using deep learning on swept-source OCT images. A laser-induced mouse skin thermal injury model was established in thirty Kunming mice, and OCT images of normal and burned areas of mouse skin were acquired at day 0, day 1, day 3, day 7, and day 14 after laser irradiation. This yielded 7000 normal and 1400 burn B-scan images, which were divided into training, validation, and test sets at an 8:1.5:0.5 ratio for the normal data and an 8:1:1 ratio for the burn data. Normal images were manually annotated, and the deep-learning U-Net model (verified against PSPNet and HRNet models) was used to segment the skin into three layers: the dermal-epidermal layer, the subcutaneous fat layer, and the muscle layer. For the burn images, the models were trained to segment just the damaged area. Three-dimensional reconstruction was then used to reconstruct the damaged tissue and calculate the damaged tissue volume. The average IoU value and F-score of the normal-tissue-layer U-Net segmentation model were 0.876 and 0.934, respectively. The IoU value of the burn-area segmentation model reached 0.907, and the F-score reached 0.951. Compared with manual labeling, the U-Net model was faster, with higher accuracy for skin stratification. OCT and U-Net segmentation can provide rapid and accurate analysis of tissue changes and clinical guidance in the treatment of burns.
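    The damaged-tissue volume obtained from the stacked B-scan segmentations is, in essence, a voxel count scaled by voxel size; a hedged sketch, where the pixel and slice spacings are placeholder values rather than the study's acquisition parameters:

    ```python
    def damaged_volume_mm3(bscan_masks, pixel_mm=0.01, slice_mm=0.05):
        """Approximate lesion volume from per-B-scan binary masks by counting
        damaged pixels and multiplying by the voxel volume.
        pixel_mm: in-plane pixel size; slice_mm: spacing between B-scans
        (both illustrative values, not from the paper)."""
        voxel_mm3 = pixel_mm * pixel_mm * slice_mm
        damaged = sum(sum(row) for mask in bscan_masks for row in mask)
        return damaged * voxel_mm3

    masks = [
        [[0, 1, 1], [0, 1, 0]],   # B-scan 1: 3 damaged pixels
        [[1, 1, 0], [0, 0, 0]],   # B-scan 2: 2 damaged pixels
    ]
    volume = damaged_volume_mm3(masks)  # 5 voxels × 5e-6 mm³ each
    ```

    Tracking this volume at days 0, 1, 3, 7, and 14 gives the recovery curve the study quantifies.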

  • Article type: Journal Article
    Ultrasound is extremely efficient for wireless signal transmission through metal barriers because it is not limited by the Faraday shielding effect. Echoing in the ultrasonic channel is one of the most challenging obstacles to high-quality communication and is generally handled with a channel equalizer or pre-distortion filter. In this study, a deep-learning algorithm called a dual-path recurrent neural network (DPRNN) was investigated for echo cancellation in an ultrasonic through-metal communication system. The actual system was built from a combination of software and hardware, consisting of a pair of ultrasonic transducers, an FPGA module, some lab-made circuits, etc. The DPRNN echo-cancellation approach was applied to signals with different signal-to-noise ratios (SNRs) at a 2 Mbps transmission rate, achieving more than 20 dB of SNR improvement in all situations. Furthermore, this approach was successfully used for image transmission through a 50 mm thick aluminum plate, exhibiting a 24.8 dB peak signal-to-noise ratio (PSNR) and an approximately 95% structural similarity index measure (SSIM). Additionally, compared with three other echo-cancellation methods (LMS, RLS, and PNLMS), DPRNN demonstrated higher efficiency. All these results firmly validate that the DPRNN algorithm is a powerful tool for echo cancellation and for enhancing the performance of ultrasonic through-metal transmission.
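    The SNR-improvement and PSNR figures quoted above follow the standard decibel definitions; a small self-contained sketch with illustrative samples (not the paper's signals):

    ```python
    import math

    def snr_db(signal, noise):
        """Signal-to-noise ratio in dB from average sample power."""
        p_sig = sum(s * s for s in signal) / len(signal)
        p_noise = sum(n * n for n in noise) / len(noise)
        return 10.0 * math.log10(p_sig / p_noise)

    def psnr_db(img_a, img_b, peak=255.0):
        """Peak signal-to-noise ratio between two images given as flat pixel lists."""
        mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
        return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)

    # A noise term at 1/10 the signal amplitude gives a 20 dB SNR.
    print(round(snr_db([1.0, -1.0], [0.1, -0.1]), 1))  # → 20.0
    ```

    An "SNR improvement higher than 20 dB" is simply the post-cancellation SNR minus the pre-cancellation SNR computed this way.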

  • Article type: Journal Article
    BACKGROUND: Adolescent idiopathic scoliosis (AIS) necessitates accurate spinal curvature assessment for effective clinical management. Traditional two-dimensional (2D) Cobb angle measurements have been the standard, but the emergence of three-dimensional (3D) automatic measurement techniques, such as those using weight-bearing 3D imaging (WR3D), presents an opportunity to enhance the accuracy and comprehensiveness of AIS evaluation.
    OBJECTIVE: This study aimed to compare traditional 2D Cobb angle measurements with 3D automatic measurements utilizing the WR3D imaging technique in patients with AIS.
    STUDY DESIGN: A cohort of 53 AIS patients, encompassing 88 spinal curves, was recruited for comparative analysis.
    PATIENT SAMPLE: The patient sample consisted of 53 individuals diagnosed with AIS.
    OUTCOME MEASURES: Cobb angles were calculated using the conventional 2D method and three different 3D methods: the Analytical Method (AM), the Plane Intersecting Method (PIM), and the Plane Projection Method (PPM).
    METHODS: The 2D Cobb angle was manually measured by three experienced clinicians on 2D frontal whole-spine radiographs. For the 3D Cobb angle measurements, the spine and femoral heads were segmented from the WR3D images using a 3D-UNet deep-learning model, and the automatic angle calculations were performed with the 3D Slicer software.
    RESULTS: AM and PIM estimates were found to be significantly larger than 2D measurements. Conversely, PPM results showed no statistical difference compared to the 2D method. These findings were consistent in a subgroup analysis based on 2D Cobb angles.
    CONCLUSIONS: Each 3D measurement method provides a unique assessment of spinal curvature, with PPM offering values closely resembling 2D measurements, while AM and PIM yield larger estimations. The utilization of WR3D technology alongside deep learning segmentation ensures accuracy and efficiency in comparative analyses. However, additional studies, particularly involving patients with severe curves, are required to validate and expand on these results. This study emphasizes the importance of selecting an appropriate measurement method considering the imaging modality and clinical context when assessing AIS, and it also underlines the need for continuous refinement of these techniques for optimal use in clinical decision-making and patient management.
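    For reference, the 2D Cobb angle is simply the angle between the most-tilted endplate lines on the frontal radiograph; a minimal sketch with synthetic landmarks (this illustrates the conventional 2D definition only, not any of the three 3D methods compared above):

    ```python
    import math

    def cobb_angle_deg(upper_endplate, lower_endplate):
        """Cobb angle (degrees) between two endplate lines in 2D.
        Each endplate is given as a ((x1, y1), (x2, y2)) landmark pair."""
        def direction(p, q):
            return (q[0] - p[0], q[1] - p[1])
        ux, uy = direction(*upper_endplate)
        vx, vy = direction(*lower_endplate)
        cos_a = (ux * vx + uy * vy) / (math.hypot(ux, uy) * math.hypot(vx, vy))
        ang = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        return min(ang, 180.0 - ang)  # reported as the acute angle

    # Upper endplate tilted +10°, lower endplate tilted -10° → 20° Cobb angle.
    up = ((0.0, 0.0), (math.cos(math.radians(10)), math.sin(math.radians(10))))
    lo = ((0.0, 0.0), (math.cos(math.radians(-10)), math.sin(math.radians(-10))))
    print(round(cobb_angle_deg(up, lo), 1))  # → 20.0
    ```

    The 3D methods (AM, PIM, PPM) generalize this by measuring the angle in, or projecting onto, different anatomical planes, which is why their estimates can differ systematically from the 2D value.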

  • Article type: Meta-Analysis
    OBJECTIVE: Artificial intelligence (AI) holds enormous potential for noninvasively identifying patients with rectal cancer who could achieve pathological complete response (pCR) following neoadjuvant chemoradiotherapy (nCRT). We aimed to conduct a meta-analysis to summarize the diagnostic performance of image-based AI models for predicting pCR to nCRT in patients with rectal cancer.
    METHODS: This study followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. A literature search of PubMed, Embase, Cochrane Library, and Web of Science was performed from inception to July 29, 2023. Studies that developed or utilized AI models for predicting pCR to nCRT in rectal cancer from medical images were included. The Quality Assessment of Diagnostic Accuracy Studies-AI was used to appraise the methodological quality of the studies. A bivariate random-effects model was used to summarize the individual sensitivities, specificities, and areas under the curve (AUCs). Subgroup and meta-regression analyses were conducted to identify potential sources of heterogeneity. The protocol for this study was registered with PROSPERO (CRD42022382374).
    RESULTS: Thirty-four studies (9933 patients) were identified. Pooled estimates of the sensitivity, specificity, and AUC of AI models for pCR prediction were 82% (95% CI: 76-87%), 84% (95% CI: 79-88%), and 90% (95% CI: 87-92%), respectively. Higher specificity was seen for the Asian population, low risk of bias, and deep learning, compared with the non-Asian population, high risk of bias, and radiomics (all P < 0.05). Single-center studies had higher sensitivity than multi-center studies (P = 0.001). The retrospective design had lower sensitivity (P = 0.012) but higher specificity (P < 0.001) than the prospective design. MRI showed higher sensitivity (P = 0.001) but lower specificity (P = 0.044) than non-MRI. The sensitivity and specificity of internal validation were higher than those of external validation (both P = 0.005).
    CONCLUSIONS: Image-based AI models exhibited favorable performance for predicting pCR to nCRT in rectal cancer. However, further clinical trials are warranted to verify the findings.
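    For intuition, pooled sensitivity and specificity can be approximated by summing the 2×2 counts across studies; note this naive pooling is a deliberate simplification of the bivariate random-effects model the meta-analysis actually used, and the study counts below are made up:

    ```python
    def pooled_sens_spec(studies):
        """Naive pooled sensitivity and specificity by summing per-study
        2x2 confusion-matrix counts (TP, FN, TN, FP)."""
        tp = sum(s["TP"] for s in studies)
        fn = sum(s["FN"] for s in studies)
        tn = sum(s["TN"] for s in studies)
        fp = sum(s["FP"] for s in studies)
        return tp / (tp + fn), tn / (tn + fp)

    # Two hypothetical studies of pCR prediction.
    studies = [
        {"TP": 40, "FN": 10, "TN": 45, "FP": 5},
        {"TP": 42, "FN": 8, "TN": 39, "FP": 11},
    ]
    sens, spec = pooled_sens_spec(studies)
    print(round(sens, 2), round(spec, 2))  # → 0.82 0.84
    ```

    A random-effects model additionally weights studies and models between-study heterogeneity, which is why the paper's subgroup differences (e.g. single-center vs multi-center) can be tested formally.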

  • Article type: Journal Article
    Introduction: Precise classification plays an important role in the treatment of pressure injury (PI), while current machine-learning or deep-learning based methods of PI classification remain low in accuracy. Methods: In this study, we developed a deep-learning-based weighted feature fusion architecture for fine-grained classification, which combines a top-down and a bottom-up pathway to fuse high-level semantic information and low-level detail representations. We validated it on our established database, which consists of 1,519 images from multi-center clinical cohorts. ResNeXt was set as the backbone network. Results: We increased the accuracy on stage 3 PI from 60.3% to 76.2% by adding a weighted feature pyramid network (wFPN). The accuracies for stage 1, 2, and 4 PI were 0.870, 0.788, and 0.845, respectively. The overall accuracy, precision, recall, and F1-score of our network were 0.815, 0.808, 0.816, and 0.811, respectively. The area under the receiver operating characteristic curve was 0.940. Conclusions: Compared with currently reported studies, our network significantly increased the overall accuracy from 75% to 81.5% and showed great performance in predicting each stage. Upon further validation, our study will pave the way to the clinical application of our network in PI management.
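    The overall accuracy, precision, recall, and F1-score reported above follow the standard multi-class (macro-averaged) definitions; a self-contained sketch on toy stage labels (the labels and predictions are illustrative):

    ```python
    def macro_metrics(y_true, y_pred, classes):
        """Overall accuracy plus macro-averaged precision, recall, and
        F1-score for a multi-class staging task."""
        acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
        precs, recs, f1s = [], [], []
        for c in classes:
            tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
            fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
            fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
            prec = tp / (tp + fp) if tp + fp else 0.0
            rec = tp / (tp + fn) if tp + fn else 0.0
            f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
            precs.append(prec)
            recs.append(rec)
            f1s.append(f1)
        n = len(classes)
        return acc, sum(precs) / n, sum(recs) / n, sum(f1s) / n

    # Toy example with two PI stages.
    acc, prec, rec, f1 = macro_metrics([1, 1, 2, 2], [1, 2, 2, 2], [1, 2])
    print(round(acc, 2), round(rec, 2))  # → 0.75 0.75
    ```

    Per-stage accuracy (as quoted for stages 1, 2, and 4) is the per-class recall from the same confusion-matrix counts.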

  • Article type: Journal Article
    OBJECTIVE: In radiotherapy, magnetic resonance (MR) imaging has higher soft-tissue contrast than computed tomography (CT) and does not emit ionizing radiation. However, manual annotation for deep-learning-based automatic organ-at-risk (OAR) delineation algorithms is expensive, making the collection of large, high-quality annotated datasets a challenge. Therefore, we propose a low-cost semi-supervised OAR segmentation method using a small set of pelvic MR image annotations.
    METHODS: We trained a deep-learning-based segmentation model using 116 sets of MR images from 116 patients. The bladder, femoral heads, rectum, and small intestine were selected as OAR regions. To generate the training set, we utilized a semi-supervised method and ensemble learning techniques. Additionally, we employed a post-processing algorithm to correct the self-annotation data. Both 2D and 3D auto-segmentation networks were evaluated for their performance. Furthermore, we evaluated the performance of the semi-supervised method with 50 labeled data and with only 10 labeled data.
    RESULTS: Using only the self-annotation and post-processing methods with the 2D segmentation model, the Dice similarity coefficients (DSCs) between the segmentation results and the reference masks were 0.954, 0.984, 0.908, and 0.852 for the bladder, femoral heads, rectum, and small intestine, respectively. The DSCs of the corresponding OARs were 0.871, 0.975, 0.975, 0.783, and 0.724 using the 3D segmentation network, and 0.896, 0.984, 0.890, and 0.828 using the 2D segmentation network with the common supervised method.
    CONCLUSIONS: The outcomes of our study demonstrate that it is possible to train a multi-OAR segmentation model using small annotation samples and additional unlabeled data. To effectively annotate the dataset, ensemble learning and post-processing methods were employed. Additionally, when dealing with anisotropy and limited sample sizes, the 2D model outperformed the 3D model.
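    One common way to turn an ensemble of models into self-annotation data is a per-pixel majority vote over their predicted masks; the abstract does not specify its exact fusion rule, so the following is a hedged sketch of that generic step:

    ```python
    def majority_vote_masks(masks):
        """Fuse binary masks from several segmentation models into one
        pseudo-label by per-pixel majority vote."""
        n = len(masks)
        fused = []
        for pixels in zip(*masks):
            # A pixel is kept only if more than half the models agree.
            fused.append(1 if sum(pixels) * 2 > n else 0)
        return fused

    # Three model predictions over 5 pixels; the majority wins per pixel.
    m1 = [1, 1, 0, 0, 1]
    m2 = [1, 0, 0, 1, 1]
    m3 = [0, 1, 0, 1, 0]
    print(majority_vote_masks([m1, m2, m3]))  # → [1, 1, 0, 1, 1]
    ```

    Pseudo-labels produced this way would then pass through the post-processing correction step before joining the training set.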

  • Article type: Journal Article
    BACKGROUND: Nigrosome 1 (N1), the largest nigrosome region in the ventrolateral area of the substantia nigra pars compacta, is identifiable by the "N1 sign" in long-echo-time gradient echo MRI. The absence of the N1 sign is a vital Parkinson's disease (PD) diagnostic marker. However, it is challenging to visualize and assess the N1 sign in clinical practice.
    OBJECTIVE: To automatically detect the presence or absence of the N1 sign on true susceptibility-weighted imaging using a deep-learning method.
    STUDY TYPE: Prospective.
    POPULATION: 453 subjects (227 males and 226 females), including 225 PD patients, 120 healthy controls (HCs), and 108 patients with other movement disorders, were prospectively recruited. They were divided into training, validation, and test cohorts of 289, 73, and 91 cases, respectively.
    FIELD STRENGTH/SEQUENCE: 3D gradient echo SWI sequence at 3T; 3D multi-echo strategically acquired gradient echo imaging at 3T; NM-sensitive 3D gradient echo sequence with MTC pulse at 3T.
    ASSESSMENT: A neuroradiologist with 5 years of experience manually delineated the substantia nigra regions. Two raters with 2 and 36 years of experience assessed the N1 sign on true susceptibility-weighted imaging (tSWI), QSM with a high-pass filter, and magnitude data combined with MTC data. We propose NINet, a neural model for automatic N1-sign identification in tSWI images.
    STATISTICAL TESTS: We compared the performance of NINet to the subjective reference standard using receiver operating characteristic (ROC) analyses, and a decision curve analysis assessed identification accuracy.
    RESULTS: NINet achieved an area under the curve (AUC) of 0.87 (CI: 0.76-0.89) in N1-sign identification, surpassing other models and the neuroradiologists. NINet localized the putative N1 sign within tSWI images with 67.3% accuracy.
    CONCLUSIONS: Our proposed NINet model's capability to determine the presence or absence of the N1 sign, along with its localization, holds promise for enhancing diagnostic accuracy when evaluating PD using MR images.
    LEVEL OF EVIDENCE: 2. TECHNICAL EFFICACY: Stage 1.