Medical image processing

  • Article type: Journal Article
    BACKGROUND: Coronary artery disease remains a leading cause of mortality among individuals with cardiovascular conditions. The therapeutic use of bioresorbable vascular scaffolds (BVSs) through stent implantation is common, yet the effectiveness of current BVS segmentation techniques from intravascular optical coherence tomography (IVOCT) images is inadequate.
    METHODS: This paper introduces an enhanced segmentation approach using a novel wavelet-based U-shape network to address these challenges. We developed a wavelet-based U-shape network that incorporates an Attention Gate (AG) and an Atrous Multi-scale Field Module (AMFM), designed to enhance segmentation accuracy by improving the differentiation between stent struts and the surrounding tissue. A unique wavelet fusion module mitigates the semantic gaps between different feature map branches, facilitating more effective feature integration.
    RESULTS: Extensive experiments demonstrate that our model surpasses existing techniques in key metrics such as the Dice coefficient, accuracy, sensitivity, and Intersection over Union (IoU), achieving scores of 85.10%, 99.77%, 86.93%, and 73.81%, respectively. The integration of the AG, AMFM, and fusion module played a crucial role in achieving these outcomes, indicating a significant enhancement in capturing detailed contextual information.
    CONCLUSIONS: The introduction of the wavelet-based U-shape network marks a substantial improvement in the segmentation of BVSs in IVOCT images, suggesting potential benefits for clinical practice in coronary artery disease treatment. This approach may also be applicable to other intricate medical imaging segmentation tasks, indicating a broad scope for future research.
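The four reported metrics can be made concrete with a small sketch. The flat 0/1 pixel lists below are toy values, not data from the paper:

```python
# Dice, accuracy, sensitivity, and IoU for binary segmentation masks,
# computed from the confusion-matrix counts (pure Python, toy inputs).

def binary_metrics(pred, truth):
    """pred, truth: equal-length sequences of 0/1 pixel labels."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    union = tp + fp + fn
    return {
        "dice": 2 * tp / (2 * tp + fp + fn) if union else 1.0,
        "iou": tp / union if union else 1.0,
        "accuracy": (tp + tn) / len(truth),
        "sensitivity": tp / (tp + fn) if (tp + fn) else 1.0,
    }

pred  = [1, 1, 0, 0, 1, 0, 0, 0]
truth = [1, 0, 0, 0, 1, 1, 0, 0]
m = binary_metrics(pred, truth)
```

Note that Dice and IoU are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why papers often report both from the same counts.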

  • Article type: Journal Article
    The application of surgical robotics in minimally invasive surgery has developed rapidly and has attracted increasing research attention in recent years. A common consensus has been reached that surgical procedures are to become less traumatic, with more intelligence and higher autonomy, which poses a serious challenge to the environmental sensing capabilities of robotic systems. One of the main sources of environmental information for robots is images, which are the basis of robot vision. In this review article, we divide clinical images into direct and indirect images based on the object of information acquisition, and into continuous, intermittently continuous, and discontinuous images according to the target-tracking frequency. The characteristics and applications of existing surgical robots in each category are introduced along these two dimensions. Our purpose in conducting this review was to analyze, summarize, and discuss the current evidence on the general rules governing the application of image technologies for medical purposes. Our analysis provides insight and guidance conducive to the development of more advanced surgical robotic systems in the future.

  • Article type: Journal Article
    Hyperspectral imaging (HSI) has demonstrated its potential to provide correlated spatial and spectral information of a sample through a non-contact and non-invasive technology. In the medical field, especially in histopathology, HSI has been applied for the classification and identification of diseased tissue and for the characterization of its morphological properties. In this work, we propose a hybrid scheme to classify non-tumor and tumor histological brain samples by hyperspectral imaging. The proposed approach is based on the identification of characteristic components in a hyperspectral image by linear unmixing, as a feature engineering step, and subsequent classification by a deep learning approach. For this last step, an ensemble of deep neural networks is evaluated through a cross-validation scheme on an augmented dataset and a transfer learning scheme. The proposed method can classify histological brain samples with an average accuracy of 88%, with reduced variability, computational cost, and inference time, which presents an advantage over state-of-the-art methods. Hence, this work demonstrates the potential of hybrid classification methodologies to achieve robust and reliable results by combining linear unmixing for feature extraction and deep learning for classification.
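The linear-unmixing feature step can be sketched for the simplest two-endmember case: each pixel spectrum is modeled as a linear combination of endmember spectra, and the fitted abundances become the classifier's features. The endmember spectra and pixel values below are invented for illustration; real pipelines estimate endmembers from the data and may enforce full non-negativity and sum-to-one constraints:

```python
# Least-squares abundances for two endmembers via the 2x2 normal
# equations (toy spectra; function name is illustrative).

def unmix_two_endmembers(y, e1, e2):
    """Solve min ||y - a1*e1 - a2*e2||^2 in closed form."""
    g11 = sum(a * a for a in e1)
    g22 = sum(a * a for a in e2)
    g12 = sum(a * b for a, b in zip(e1, e2))
    b1 = sum(a * b for a, b in zip(e1, y))
    b2 = sum(a * b for a, b in zip(e2, y))
    det = g11 * g22 - g12 * g12
    a1 = (b1 * g22 - b2 * g12) / det
    a2 = (g11 * b2 - g12 * b1) / det
    # Clip to the physically meaningful non-negative range.
    return max(a1, 0.0), max(a2, 0.0)

e1 = [1.0, 0.8, 0.2]       # toy endmember spectrum 1
e2 = [0.1, 0.5, 1.0]       # toy endmember spectrum 2
pixel = [0.55, 0.65, 0.6]  # observed pixel = 0.5*e1 + 0.5*e2 exactly
a1, a2 = unmix_two_endmembers(pixel, e1, e2)
```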

  • Article type: Journal Article
    Brain tumors occur due to the expansion of abnormal cell tissues and can be malignant (cancerous) or benign (not cancerous). Numerous factors such as the position, size, and progression rate are considered while detecting and diagnosing brain tumors. Detecting brain tumors in their initial phases is vital for diagnosis, where MRI (magnetic resonance imaging) scans play an important role. Over the years, deep learning models have been extensively used for medical image processing. The current study primarily investigates the novel Fine-Tuned Vision Transformer models (FTVTs), namely FTVT-b16, FTVT-b32, FTVT-l16, and FTVT-l32, for brain tumor classification, while also comparing them with other established deep learning models such as ResNet-50, MobileNet-V2, and EfficientNet-B0. A dataset with 7,023 images (MRI scans) categorized into four different classes, namely glioma, meningioma, pituitary, and no tumor, is used for classification. Further, the study presents a comparative analysis of these models, including their accuracies and other evaluation metrics such as recall, precision, and F1-score for each class. The deep learning models ResNet-50, EfficientNet-B0, and MobileNet-V2 obtained accuracies of 96.5%, 95.1%, and 94.9%, respectively. Among all the FTVT models, FTVT-l16 achieved a remarkable accuracy of 98.70%, whereas the other FTVT models FTVT-b16, FTVT-b32, and FTVT-l32 achieved accuracies of 98.09%, 96.87%, and 98.62%, respectively, hence proving the efficacy and robustness of FTVTs in medical image processing.
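The per-class metrics reported here (precision, recall, F1-score) reduce to simple counts against one class at a time. The labels below are toy values that only borrow the dataset's four class names:

```python
# One-vs-rest precision, recall, and F1 for a multi-class classifier
# (pure Python, toy predictions).

def per_class_metrics(y_true, y_pred, cls):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

y_true = ["glioma", "glioma", "meningioma", "pituitary", "no tumor", "glioma"]
y_pred = ["glioma", "meningioma", "meningioma", "pituitary", "no tumor", "glioma"]
p, r, f1 = per_class_metrics(y_true, y_pred, "glioma")
```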

  • Article type: Journal Article
    The incorporation of automatic segmentation methodologies into dental X-ray images refined the paradigms of clinical diagnostics and therapeutic planning by facilitating meticulous, pixel-level articulation of both dental structures and proximate tissues. This underpins the pillars of early pathological detection and meticulous disease progression monitoring. Nonetheless, conventional segmentation frameworks often encounter significant setbacks attributable to the intrinsic limitations of X-ray imaging, including compromised image fidelity, obscured delineation of structural boundaries, and the intricate anatomical structures of dental constituents such as pulp, enamel, and dentin. To surmount these impediments, we propose the Deformable Convolution and Mamba Integration Network, an innovative 2D dental X-ray image segmentation architecture, which amalgamates a Coalescent Structural Deformable Encoder, a Cognitively-Optimized Semantic Enhance Module, and a Hierarchical Convergence Decoder. Collectively, these components bolster the management of multi-scale global features, fortify the stability of feature representation, and refine the amalgamation of feature vectors. A comparative assessment against 14 baselines underscores its efficacy, registering a 0.95% enhancement in the Dice Coefficient and a diminution of the 95th percentile Hausdorff Distance to 7.494.
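The 95th-percentile Hausdorff distance (HD95) reported above can be sketched on toy 2D boundary point sets; in practice it runs on extracted segmentation contours, and percentile ranking conventions differ slightly between libraries:

```python
# HD95: 95th percentile of the combined directed nearest-neighbor
# distances between two boundary point sets (toy 2D points).
import math

def hd95(a, b):
    """a, b: lists of (x, y) boundary points."""
    def directed(src, dst):
        # For each point in src, distance to its nearest point in dst.
        return [min(math.dist(p, q) for q in dst) for p in src]
    dists = sorted(directed(a, b) + directed(b, a))
    # Nearest-rank 95th percentile; conventions vary by library.
    k = max(0, math.ceil(0.95 * len(dists)) - 1)
    return dists[k]

pred_boundary = [(0, 0), (1, 0), (2, 0), (3, 0)]
true_boundary = [(0, 1), (1, 1), (2, 1), (3, 5)]
d = hd95(pred_boundary, true_boundary)
```

The percentile trims only the farthest outliers, which is why HD95 is preferred over the plain (maximum) Hausdorff distance for noisy medical contours.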

  • Article type: Journal Article
    In clinical settings limited by equipment, attaining lightweight skin lesion segmentation is pivotal, as it facilitates the integration of the model into diverse medical devices, thereby enhancing operational efficiency. However, a lightweight model design may suffer accuracy degradation, especially when dealing with complex images such as skin lesion images with irregular regions, blurred boundaries, and oversized boundaries. To address these challenges, we propose an efficient lightweight attention network (ELANet) for the skin lesion segmentation task. In ELANet, two different attention mechanisms in the bilateral residual module (BRM) provide complementary information, enhancing sensitivity to features in the spatial and channel dimensions, respectively; multiple BRMs are then stacked for efficient feature extraction from the input. In addition, the network acquires global information and improves segmentation accuracy by passing feature maps of different scales through multi-scale attention fusion (MAF) operations. Finally, we evaluate the performance of ELANet on three publicly available datasets, ISIC2016, ISIC2017, and ISIC2018. The experimental results show that our algorithm achieves mIoU scores of 89.87%, 81.85%, and 82.87% on the three datasets with only 0.459 M parameters, an excellent balance between accuracy and compactness that is superior to many existing segmentation methods.

  • Article type: Journal Article
    The evaluation of mammographic breast density, a critical indicator of breast cancer risk, is traditionally performed by radiologists via visual inspection of mammography images, utilizing the Breast Imaging-Reporting and Data System (BI-RADS) breast density categories. However, this method is subject to substantial interobserver variability, leading to inconsistencies and potential inaccuracies in density assessment and subsequent risk estimations. To address this, we present a deep learning-based automatic detection algorithm (DLAD) designed for the automated evaluation of breast density. Our multicentric, multi-reader study leverages a diverse dataset of 122 full-field digital mammography studies (488 images in CC and MLO projections) sourced from three institutions. We invited two experienced radiologists to conduct a retrospective analysis, establishing a ground truth for 72 mammography studies (BI-RADS class A: 18, BI-RADS class B: 43, BI-RADS class C: 7, BI-RADS class D: 4). The efficacy of the DLAD was then compared to the performance of five independent radiologists with varying levels of experience. The DLAD showed robust performance, achieving an accuracy of 0.819 (95% CI: 0.736-0.903), along with an F1 score of 0.798 (0.594-0.905), precision of 0.806 (0.596-0.896), recall of 0.830 (0.650-0.946), and a Cohen's kappa (κ) of 0.708 (0.562-0.841). The algorithm achieved robust performance that matches, and in four cases exceeds, that of individual radiologists. The statistical analysis did not reveal a significant difference in accuracy between the DLAD and the radiologists, underscoring the model's competitive diagnostic alignment with professional radiologist assessments. These results demonstrate that the deep learning-based automatic detection algorithm can enhance the accuracy and consistency of breast density assessments, offering a reliable tool for improving breast cancer screening outcomes.
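Cohen's kappa, the agreement statistic reported for the algorithm, corrects the observed agreement between two raters for agreement expected by chance. The BI-RADS labels below are invented for illustration, not the study data:

```python
# Cohen's kappa from two raters' categorical labels:
# kappa = (p_observed - p_expected) / (1 - p_expected).
from collections import Counter

def cohens_kappa(r1, r2):
    n = len(r1)
    observed = sum(1 for a, b in zip(r1, r2) if a == b) / n
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement from each rater's marginal label frequencies.
    expected = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / (n * n)
    return (observed - expected) / (1 - expected)

rater1 = ["A", "A", "B", "B", "C", "D", "B", "A"]
rater2 = ["A", "B", "B", "B", "C", "D", "B", "A"]
kappa = cohens_kappa(rater1, rater2)
```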

  • Article type: Journal Article
    Three-dimensional vessel model reconstruction from patient-specific magnetic resonance angiography (MRA) images often requires manual maneuvers. This study aimed to establish a deep learning (DL)-based method for vessel model reconstruction. Time-of-flight MRA of 40 patients with internal carotid artery aneurysms was prepared, and three-dimensional vessel models were constructed using the threshold and region-growing method. Using those datasets, supervised deep learning with a 2D U-Net was performed to reconstruct 3D vessel models. The accuracy of the DL-based vessel segmentations was assessed using 20 MRA images outside the training dataset. The Dice coefficient was used as the indicator of model accuracy, and blood flow simulation was performed using the DL-based vessel model. The trained DL model successfully reconstructed a three-dimensional model in all 60 cases. The Dice coefficient on the test dataset was 0.859. Of note, the DL-generated model proved its efficacy even for large aneurysms (> 10 mm in diameter). The reconstructed model was feasible for blood flow simulation to assist clinical decision-making. Our DL-based method successfully reconstructed three-dimensional vessel models with moderate accuracy. Future studies are warranted to show that DL-based technology can promote medical image processing.
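The threshold-and-region-growing step used to build the ground-truth vessel models can be sketched on a toy 2D intensity grid; a real pipeline would operate on the 3D MRA volume with 6- or 26-connectivity, and the grid and seed here are illustrative:

```python
# Breadth-first region growing from a seed: collect 4-connected pixels
# whose intensity meets the threshold (toy 2D grid).
from collections import deque

def region_grow(img, seed, threshold):
    h, w = len(img), len(img[0])
    grown, queue = set(), deque([seed])
    while queue:
        y, x = queue.popleft()
        if (y, x) in grown or not (0 <= y < h and 0 <= x < w):
            continue
        if img[y][x] < threshold:
            continue
        grown.add((y, x))
        queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return grown

img = [
    [0, 0, 9, 0],
    [0, 8, 9, 0],
    [0, 9, 0, 0],
    [7, 0, 0, 9],  # both bright pixels here are disconnected and stay out
]
mask = region_grow(img, seed=(0, 2), threshold=7)
```

Unlike plain thresholding, region growing keeps only the bright structure connected to the seed, which is what isolates a single vessel tree from other bright tissue.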

  • Article type: Journal Article
    BACKGROUND: Cardiovascular diseases are the top cause of death in China. Manual segmentation of cardiovascular images, prone to errors, demands an automated, rapid, and precise solution for clinical diagnosis.
    OBJECTIVE: The paper highlights deep learning in automatic cardiovascular image segmentation, efficiently identifying pixel regions of interest for auxiliary diagnosis and research in cardiovascular diseases.
    METHODS: In our study, we introduce innovative Region Weighted Fusion (RWF) and Shape Feature Refinement (SFR) modules, utilizing polarized self-attention for significant performance improvement in multiscale feature integration and shape fine-tuning. The RWF module includes reshaping, weight computation, and feature fusion, enhancing high-resolution attention computation and reducing information loss. Model optimization through loss functions offers a more reliable solution for cardiovascular medical image processing.
    RESULTS: Our method excels in segmentation accuracy, emphasizing the vital role of the RWF module. It demonstrates outstanding performance in cardiovascular image segmentation, potentially raising clinical practice standards.
    CONCLUSIONS: Our method ensures reliable medical image processing, guiding cardiovascular segmentation for future advancements in practical healthcare and contributing scientifically to enhanced disease diagnosis and treatment.

  • Article type: English Abstract
    Medical image registration plays an important role in medical diagnosis and treatment planning. However, current registration methods based on deep learning still face some challenges, such as insufficient ability to extract global information, a large number of network model parameters, and slow inference speed. Therefore, this paper proposes a new model, LCU-Net, which uses parallel lightweight convolution to improve global information extraction; the problems of a large parameter count and slow inference are addressed by multi-scale fusion. The experimental results showed that the Dice coefficient of LCU-Net reached 0.823, the Hausdorff distance was 1.258, and the number of network parameters was reduced by about one quarter compared with that before multi-scale fusion. The proposed algorithm shows remarkable advantages in medical image registration tasks: it not only surpasses the existing comparison algorithms in performance but also has excellent generalization and broad application prospects.
