Radiographic Image Interpretation, Computer-Assisted

  • Article type: Journal Article
    OBJECTIVE: To examine whether there is a significant difference in image quality between the deep learning reconstruction (DLR [AiCE, Advanced Intelligent Clear-IQ Engine]) and hybrid iterative reconstruction (HIR [AIDR 3D, adaptive iterative dose reduction three dimensional]) algorithms on the conventional enhanced and CE-boost (contrast-enhancement-boost) images of indirect computed tomography venography (CTV) of lower extremities.
    METHODS: In this retrospective study, seventy patients who underwent CTV from June 2021 to October 2022 to assess deep vein thrombosis and varicose veins were included. Unenhanced and enhanced images were reconstructed with AIDR 3D and AiCE, and AIDR 3D-boost and AiCE-boost images were obtained using subtraction software. Objective and subjective image quality was assessed, and radiation doses were recorded.
    RESULTS: The CT values of the inferior vena cava (IVC), femoral vein (FV), and popliteal vein (PV) in the CE-boost images were approximately 1.3 (1.31-1.36) times higher than those in the enhanced images. There were no significant differences in the mean CT values of the IVC, FV, and PV between the AIDR 3D and AiCE images or between the AIDR 3D-boost and AiCE-boost images. Noise in the AiCE and AiCE-boost images was significantly lower than in the AIDR 3D and AIDR 3D-boost images (P < 0.05). The SNR (signal-to-noise ratio), CNR (contrast-to-noise ratio), and subjective scores of the AiCE-boost images were the highest among the four groups, surpassing the AiCE, AIDR 3D, and AIDR 3D-boost images (all P < 0.05).
    CONCLUSIONS: In indirect CTV images of the lower extremities, DLR with the CE-boost technique could decrease image noise and improve the CT values, SNR, CNR, and subjective image scores. AiCE-boost images received the highest subjective image quality scores and were more readily accepted by radiologists.
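
    As a rough illustration of the CE-boost series and the objective metrics reported above, the sketch below assumes the boost image is formed by adding the registered subtraction image (enhanced minus unenhanced) back onto the enhanced series, and that SNR/CNR are taken from vessel and background ROIs; the function names and ROI choices are assumptions, not the study's exact protocol.

    ```python
    import numpy as np

    def ce_boost(enhanced: np.ndarray, unenhanced: np.ndarray) -> np.ndarray:
        """Hypothetical CE-boost: add the subtraction (iodine) image back onto the enhanced series.
        Assumes both series are already co-registered and expressed in HU."""
        return enhanced + (enhanced - unenhanced)

    def snr_cnr(image: np.ndarray, vessel_roi: np.ndarray, background_roi: np.ndarray):
        """SNR = vessel mean / vessel SD; CNR = (vessel mean - background mean) / background SD."""
        v, b = image[vessel_roi], image[background_roi]
        return v.mean() / v.std(), (v.mean() - b.mean()) / b.std()
    ```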

  • Article type: Journal Article
    BACKGROUND: Pneumoconiosis has a significant impact on the quality of patient survival due to its difficult staging diagnosis and poor prognosis. This study aimed to develop a computer-aided diagnostic system for the screening and staging of pneumoconiosis based on a multi-stage joint deep learning approach using X-ray chest radiographs of pneumoconiosis patients.
    METHODS: In this study, a total of 498 medical chest radiographs were obtained from the Department of Radiology of West China Fourth Hospital. The dataset was randomly divided into a training set and a test set at a ratio of 4:1. Following histogram equalization for image enhancement, the images were segmented using the U-Net model, and staging was predicted using a convolutional neural network classification model. We first used Efficient-Net for multi-class staging diagnosis, but the results showed that stage I/II pneumoconiosis was difficult to diagnose. Therefore, based on clinical practice, we continued to improve the model by using the Res-Net 34 multi-stage joint method.
    RESULTS: Of the 498 cases collected, the classification model using Efficient-Net achieved an accuracy of 83% with a Quadratic Weighted Kappa (QWK) score of 0.889. The classification model using the multi-stage joint approach of Res-Net 34 achieved an accuracy of 89% with an area under the curve (AUC) of 0.98 and a high QWK score of 0.94.
    CONCLUSIONS: In this study, the diagnostic accuracy of pneumoconiosis staging was significantly improved by an innovative combined multi-stage approach, which provided a reference for clinical application and pneumoconiosis screening.
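
    For reference, the accuracy and quadratic weighted kappa (QWK) used above to evaluate staging can be computed with scikit-learn as sketched below; the stage labels are illustrative placeholders, not the study's data.

    ```python
    from sklearn.metrics import accuracy_score, cohen_kappa_score

    # Illustrative pneumoconiosis stages (0 = normal, 1-3 = stages I-III); placeholder values only.
    y_true = [0, 1, 2, 3, 1, 0, 2, 3]
    y_pred = [0, 1, 1, 3, 2, 0, 2, 3]

    print("accuracy:", accuracy_score(y_true, y_pred))
    print("QWK:", cohen_kappa_score(y_true, y_pred, weights="quadratic"))
    ```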

  • Article type: Journal Article
    BACKGROUND: The image quality of computed tomography angiography (CTA) following endovascular aneurysm repair (EVAR) is unsatisfactory, since artifacts resulting from metallic implants obstruct the clear depiction of the stent and isolation lumens as well as adjacent soft tissues. However, current techniques for reducing these artifacts still require further advancement because of drawbacks such as higher radiation doses and longer processing times. Thus, the aim of this study is to assess the impact of utilizing Single-Energy Metal Artifact Reduction (SEMAR) alongside a novel deep learning image reconstruction technique, known as the Advanced Intelligent Clear-IQ Engine (AiCE), on the image quality of CTA follow-ups conducted after EVAR.
    METHODS: This retrospective study included 47 patients (mean age ± standard deviation: 68.6 ± 7.8 years; 37 males) who underwent CTA examinations following EVAR. Images were reconstructed using four different methods: hybrid iterative reconstruction (HIR), AiCE, the combination of HIR and SEMAR (HIR + SEMAR), and the combination of AiCE and SEMAR (AiCE + SEMAR). Two radiologists, blinded to the reconstruction techniques, independently evaluated the images. Quantitative assessments included measurements of image noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), the longest length of artifacts (AL), and artifact index (AI). These parameters were subsequently compared across different reconstruction methods.
    RESULTS: The subjective results indicated that AiCE + SEMAR performed the best in terms of image quality. The mean image noise intensity was significantly lower in the AiCE + SEMAR group (25.35 ± 6.51 HU) than in the HIR (47.77 ± 8.76 HU), AiCE (42.93 ± 10.61 HU), and HIR + SEMAR (30.34 ± 4.87 HU) groups (p < 0.001). Additionally, AiCE + SEMAR exhibited the highest SNRs and CNRs, as well as the lowest AIs and ALs. Importantly, endoleaks and thrombi were most clearly visualized using AiCE + SEMAR.
    CONCLUSIONS: In comparison with other reconstruction methods, the combination of AiCE + SEMAR demonstrates superior image quality, thereby enhancing detection capability and diagnostic confidence for potential complications such as early minor endoleaks and thrombi following EVAR. This improvement in image quality could lead to more accurate diagnoses and better patient outcomes.
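
    The objective artifact metrics reported above are typically derived from ROI statistics; a minimal sketch using one common definition of the artifact index is shown below. The study may define its noise and AI measurements differently, so treat this as an illustration only.

    ```python
    import numpy as np

    def roi_noise(image: np.ndarray, roi: np.ndarray) -> float:
        """Image noise as the standard deviation of HU values inside a boolean ROI mask."""
        return float(image[roi].std())

    def artifact_index(sd_artifact: float, sd_reference: float) -> float:
        """One common artifact index: sqrt(SD_artifact^2 - SD_reference^2), clipped at zero."""
        return float(np.sqrt(max(sd_artifact ** 2 - sd_reference ** 2, 0.0)))
    ```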

  • Article type: English Abstract
    Automatic detection of pulmonary nodules based on computed tomography (CT) images can significantly improve the diagnosis and treatment of lung cancer. However, there is a lack of effective interactive tools to record the marked results of radiologists in real time and feed them back to the algorithm model for iterative optimization. This paper designed and developed an online interactive review system supporting the assisted diagnosis of lung nodules in CT images. Lung nodules were detected by the preset model and presented to doctors, who marked or corrected the lung nodules detected by the system with their professional knowledge; the AI model was then iteratively optimized with an active learning strategy according to the radiologists' marked results to continuously improve the accuracy of the model. Subsets 5-9 of the Lung Nodule Analysis 2016 (LUNA16) dataset were used for the iteration experiments. The precision, F1-score, and MIoU indexes improved steadily as the number of iterations increased, with the precision rising from 0.2139 to 0.5656. The results of this paper show that the system not only uses a deep segmentation model to assist radiologists, but also optimizes the model by making maximum use of the radiologists' feedback, iteratively improving its accuracy and better assisting radiologists.
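
    One plausible reading of the MIoU index tracked across the active-learning iterations is the mean intersection-over-union between predicted and radiologist-corrected nodule masks, as sketched below; the exact definition used by the system is not stated in the abstract.

    ```python
    import numpy as np

    def iou(pred: np.ndarray, target: np.ndarray) -> float:
        """Intersection-over-union between two boolean nodule masks."""
        union = np.logical_or(pred, target).sum()
        return float(np.logical_and(pred, target).sum() / union) if union else 1.0

    def mean_iou(mask_pairs) -> float:
        """Average IoU over (predicted, corrected) mask pairs accumulated in one review round."""
        return float(np.mean([iou(p, t) for p, t in mask_pairs]))
    ```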

  • Article type: Journal Article
    BACKGROUND: To assess the improvement of image quality and diagnostic acceptance of thinner slice iodine maps enabled by deep learning image reconstruction (DLIR) in abdominal dual-energy CT (DECT).
    METHODS: This study prospectively included 104 participants with 136 lesions. Four series of iodine maps were generated from the portal-venous scans of contrast-enhanced abdominal DECT: 5-mm and 1.25-mm maps using adaptive statistical iterative reconstruction-V (Asir-V) with 50% blending (AV-50), and 1.25-mm maps using DLIR at medium (DLIR-M) and high (DLIR-H) strength. The iodine concentrations (IC) and their standard deviations at nine anatomical sites were measured, and the corresponding coefficients of variation (CV) were calculated. The noise power spectrum (NPS) and edge rise slope (ERS) were measured. Five radiologists rated image quality in terms of image noise, contrast, sharpness, texture, and small-structure visibility, and evaluated the overall diagnostic acceptability of the images and lesion conspicuity.
    RESULTS: The four reconstructions maintained the IC values unchanged in the nine anatomical sites (all p > 0.999). Compared to 1.25-mm AV-50, 1.25-mm DLIR-M and DLIR-H significantly reduced CV values (all p < 0.001) and presented lower noise and noise peak (both p < 0.001). Compared to 5-mm AV-50, 1.25-mm images had higher ERS (all p < 0.001). The differences in the peak and average spatial frequency among the four reconstructions were relatively small but statistically significant (both p < 0.001). The 1.25-mm DLIR-M images were rated higher than the 5-mm and 1.25-mm AV-50 images for diagnostic acceptability and lesion conspicuity (all p < 0.001).
    CONCLUSIONS: DLIR may facilitate thinner-slice iodine maps in abdominal DECT, improving image quality, diagnostic acceptability, and lesion conspicuity.
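
    The coefficient of variation and edge rise slope quoted above can be illustrated with the simple calculations below; the ROI placement and edge-profile protocol are assumptions, not the study's published measurement method.

    ```python
    import numpy as np

    def coefficient_of_variation(ic_measurements: np.ndarray) -> float:
        """CV of repeated iodine-concentration measurements within an anatomical site: SD / mean."""
        return float(ic_measurements.std(ddof=1) / ic_measurements.mean())

    def edge_rise_slope(line_profile: np.ndarray, pixel_spacing_mm: float) -> float:
        """Steepest intensity change along a profile drawn across an edge, per millimetre."""
        return float(np.max(np.abs(np.diff(line_profile))) / pixel_spacing_mm)
    ```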

  • Article type: Journal Article
    Deep learning image reconstruction (DLIR) algorithms employ convolutional neural networks (CNNs) for CT image reconstruction to produce CT images with a very low noise level, even at a low radiation dose. The aim of this study was to assess whether the DLIR algorithm reduces the CT effective dose (ED) and improves CT image quality in comparison with filtered back projection (FBP) and iterative reconstruction (IR) algorithms in intensive care unit (ICU) patients. We identified all consecutive patients referred to the ICU of a single hospital who underwent at least two consecutive chest and/or abdominal contrast-enhanced CT scans within a 30-day period using DLIR and subsequently the FBP or IR algorithm (Advanced Modeled Iterative Reconstruction [ADMIRE] model-based algorithm or Adaptive Iterative Dose Reduction 3D [AIDR 3D] hybrid algorithm) for CT image reconstruction. The radiation ED, noise level, and signal-to-noise ratio (SNR) were compared between the different CT scanners. The non-parametric Wilcoxon test was used for statistical comparison. Statistical significance was set at p < 0.05. A total of 83 patients (mean age, 59 ± 15 years [standard deviation]; 56 men) were included. DLIR vs. FBP reduced the ED (18.45 ± 13.16 mSv vs. 22.06 ± 9.55 mSv, p < 0.05), while DLIR vs. FBP and vs. the ADMIRE and AIDR 3D IR algorithms reduced image noise (8.45 ± 3.24 vs. 14.85 ± 2.73 vs. 14.77 ± 32.77 and 11.17 ± 32.77, p < 0.05) and increased the SNR (11.53 ± 9.28 vs. 3.99 ± 1.23 vs. 5.84 ± 2.74 and 3.58 ± 2.74, p < 0.05). Despite the reduced ED, CT scanners employing DLIR improved the SNR in ICU patients compared with CT scanners using FBP or IR algorithms.
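
    The paired, non-parametric comparison described above corresponds to the Wilcoxon signed-rank test; a minimal example with SciPy is given below, using placeholder noise values rather than the study's measurements.

    ```python
    from scipy.stats import wilcoxon

    # Placeholder paired noise measurements (HU) for the same patients under two reconstructions.
    noise_dlir = [8.1, 9.0, 7.6, 8.8, 8.3, 7.9]
    noise_fbp = [14.2, 15.1, 14.9, 13.8, 15.6, 14.4]

    stat, p = wilcoxon(noise_dlir, noise_fbp)
    print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4f}")
    ```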

  • Article type: Journal Article
    Computer-aided diagnosis systems play a crucial role in the diagnosis and early detection of breast cancer. However, most current methods focus primarily on the dual-view analysis of a single breast, thereby neglecting the potentially valuable information between bilateral mammograms. In this paper, we propose a Four-View Correlation and Contrastive Joint Learning Network (FV-Net) for the classification of bilateral mammogram images. Specifically, FV-Net focuses on extracting and matching features across the four views of bilateral mammograms while maximizing both their similarities and dissimilarities. Through the Cross-Mammogram Dual-Pathway Attention Module, feature matching between bilateral mammogram views is achieved, capturing the consistency and complementary features across mammograms and effectively reducing feature misalignment. In the reconstituted feature maps derived from bilateral mammograms, the Bilateral-Mammogram Contrastive Joint Learning module performs associative contrastive learning on positive and negative sample pairs within each local region. This aims to maximize the correlation between similar local features and enhance the differentiation between dissimilar features across the bilateral mammogram representations. Our experimental results on a test set comprising 20% of the combined Mini-DDSM and Vindr-mamo datasets, as well as on the INbreast dataset, show that our model exhibits superior performance in breast cancer classification compared to competing methods.
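
    A generic InfoNCE-style pairwise contrastive loss, of the kind the Bilateral-Mammogram Contrastive Joint Learning module builds on, is sketched below in PyTorch; this is a simplified stand-in under assumed tensor shapes, not FV-Net's actual loss.

    ```python
    import torch
    import torch.nn.functional as F

    def pairwise_contrastive_loss(anchor, positive, negative, temperature: float = 0.1):
        """InfoNCE-style loss over local feature vectors of shape (B, D): pull the bilateral
        positive pair together and push the negative pair apart."""
        anchor, positive, negative = (F.normalize(x, dim=-1) for x in (anchor, positive, negative))
        pos = torch.exp((anchor * positive).sum(dim=-1) / temperature)
        neg = torch.exp((anchor * negative).sum(dim=-1) / temperature)
        return (-torch.log(pos / (pos + neg))).mean()
    ```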

  • Article type: Journal Article
    OBJECTIVE: Pulmonary cavity lesions are among the common lung lesions caused by a variety of malignant and non-malignant diseases. Diagnosis of a cavity lesion is commonly based on accurate recognition of its typical morphological characteristics. A deep learning-based model that automatically detects, segments, and quantifies the region of a cavity lesion on CT scans has potential in clinical diagnosis, monitoring, and treatment efficacy assessment.
    METHODS: A weakly-supervised deep learning-based method named CSA2-ResNet was proposed to quantitatively characterize cavity lesions in this paper. The lung parenchyma was firstly segmented using a pretrained 2D segmentation model, and then the output with or without cavity lesions was fed into the developed deep neural network containing hybrid attention modules. Next, the visualized lesion was generated from the activation region of the classification network using gradient-weighted class activation mapping, and image processing was applied for post-processing to obtain the expected segmentation results of cavity lesions. Finally, the automatic characteristic measurement of cavity lesions (e.g., area and thickness) was developed and verified.
    RESULTS: The proposed weakly-supervised segmentation method achieved an accuracy, precision, specificity, recall, and F1-score of 98.48%, 96.80%, 97.20%, 100%, and 98.36%, respectively, a significant improvement (P < 0.05) over other methods. Quantitative characterization of lesion morphology also yielded good analytical results.
    CONCLUSIONS: The proposed easily-trained and high-performance deep learning model provides a fast and effective way for the diagnosis and dynamic monitoring of pulmonary cavity lesions in the clinic. Clinical and Translational Impact Statement: This model used artificial intelligence to achieve the detection and quantitative analysis of pulmonary cavity lesions in CT scans. The morphological features revealed in the experiments can be utilized as potential indicators for the diagnosis and dynamic monitoring of patients with cavity lesions.
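
    The weakly-supervised step that turns a Grad-CAM activation map into a lesion mask can be approximated as below; the normalisation and threshold are assumptions, and the paper applies additional image-processing steps that are omitted here.

    ```python
    import numpy as np

    def cam_to_mask(cam: np.ndarray, threshold: float = 0.5) -> np.ndarray:
        """Min-max normalise a Grad-CAM heatmap and threshold it into a rough cavity-lesion mask."""
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
        return cam >= threshold
    ```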

  • Article type: Journal Article
    BACKGROUND: Abdominal CT scans are vital for diagnosing abdominal diseases but have limitations in tissue analysis and soft tissue detection. Dual-energy CT (DECT) can improve these issues by offering low keV virtual monoenergetic images (VMI), enhancing lesion detection and tissue characterization. However, its cost limits widespread use.
    OBJECTIVE: To develop a model that converts conventional images (CI) into generative virtual monoenergetic images at 40 keV (Gen-VMI40keV) for upper abdominal CT scans.
    METHODS: In total, 444 patients who underwent upper abdominal spectral contrast-enhanced CT were enrolled and assigned to the training and validation datasets (7:3). Then, 40-keV portal-venous virtual monoenergetic images (VMI40keV) and CI, generated from the spectral CT scans, served as the target and source images. These images were employed to build and train a CI-VMI40keV model. Indexes such as the mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) were utilized to determine the best generator mode. An additional 198 cases were divided into three test groups: Group 1 (58 cases with visible abnormalities), Group 2 (40 cases with hepatocellular carcinoma [HCC]), and Group 3 (100 cases from a publicly available HCC dataset). Both subjective and objective evaluations were performed. Comparisons, correlation analyses, and Bland-Altman plot analyses were conducted.
    RESULTS: The 192nd iteration produced the best generator mode (lower MAE and the highest PSNR and SSIM). In test Groups 1 and 2, both VMI40keV and Gen-VMI40keV significantly improved CT values, as well as SNR and CNR, for all organs compared to CI. Significant positive correlations of the objective indexes were found between Gen-VMI40keV and VMI40keV in various organs and lesions. Bland-Altman analysis showed that the differences between the two imaging types mostly fell within the 95% confidence interval. Pearson's and Spearman's correlation coefficients for objective scores between Gen-VMI40keV and VMI40keV in Groups 1 and 2 ranged from 0.645 to 0.980. In Group 3, Gen-VMI40keV yielded significantly higher CT values for HCC (220.5 HU vs. 109.1 HU) and liver (220.0 HU vs. 112.8 HU) compared to CI (p < 0.01). The CNR for HCC/liver was also significantly higher in Gen-VMI40keV (2.0 vs. 1.2) than in CI (p < 0.01). Additionally, Gen-VMI40keV was subjectively evaluated as having higher image quality than CI.
    CONCLUSIONS: The CI-VMI40keV model can generate Gen-VMI40keV from conventional CT scans that closely resemble VMI40keV.
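
    The generator-selection indexes named above (MAE, PSNR, SSIM) can be computed as sketched below with scikit-image; the data_range value is an assumed normalisation choice, not one stated in the abstract.

    ```python
    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def generator_metrics(gen_vmi: np.ndarray, ref_vmi: np.ndarray, data_range: float = 2000.0):
        """MAE, PSNR and SSIM between a generated and a reference 40-keV image (2D float arrays)."""
        mae = float(np.mean(np.abs(gen_vmi - ref_vmi)))
        psnr = peak_signal_noise_ratio(ref_vmi, gen_vmi, data_range=data_range)
        ssim = structural_similarity(ref_vmi, gen_vmi, data_range=data_range)
        return mae, psnr, ssim
    ```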

  • Article type: Journal Article
    Precise segmentation of liver tumors from computed tomography (CT) scans is a prerequisite step in various clinical applications. Multi-phase CT imaging enhances tumor characterization, thereby assisting radiologists in accurate identification. However, existing automatic liver tumor segmentation models did not fully exploit multi-phase information and lacked the capability to capture global information. In this study, we developed a pioneering multi-phase feature interaction Transformer network (MI-TransSeg) for accurate liver tumor segmentation and a subsequent microvascular invasion (MVI) assessment in contrast-enhanced CT images. In the proposed network, an efficient multi-phase features interaction module was introduced to enable bi-directional feature interaction among multiple phases, thus maximally exploiting the available multi-phase information. To enhance the model's capability to extract global information, a hierarchical transformer-based encoder and decoder architecture was designed. Importantly, we devised a multi-resolution scales feature aggregation strategy (MSFA) to optimize the parameters and performance of the proposed model. Subsequent to segmentation, the liver tumor masks generated by MI-TransSeg were applied to extract radiomic features for the clinical applications of the MVI assessment. With Institutional Review Board (IRB) approval, a clinical multi-phase contrast-enhanced CT abdominal dataset was collected that included 164 patients with liver tumors. The experimental results demonstrated that the proposed MI-TransSeg was superior to various state-of-the-art methods. Additionally, we found that the tumor mask predicted by our method showed promising potential in the assessment of microvascular invasion. In conclusion, MI-TransSeg presents an innovative paradigm for the segmentation of complex liver tumors, thus underscoring the significance of multi-phase CT data exploitation. The proposed MI-TransSeg network has the potential to assist radiologists in diagnosing liver tumors and assessing microvascular invasion.
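
    A toy version of bidirectional feature interaction between two contrast phases, using standard cross-attention, is sketched below; it illustrates the general idea only and is not MI-TransSeg's multi-phase interaction module.

    ```python
    import torch
    import torch.nn as nn

    class BiPhaseInteraction(nn.Module):
        """Bidirectional cross-attention between two phase feature sequences of shape (B, N, C)."""
        def __init__(self, dim: int, heads: int = 4):
            super().__init__()
            self.attn_ab = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.attn_ba = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
            a_from_b, _ = self.attn_ab(feat_a, feat_b, feat_b)  # phase A queries phase B
            b_from_a, _ = self.attn_ba(feat_b, feat_a, feat_a)  # phase B queries phase A
            return feat_a + a_from_b, feat_b + b_from_a          # residual fusion of both phases
    ```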